Meta-CoTGAN: A Meta Cooperative Training Paradigm for Improving Adversarial Text Generation

03/12/2020 · Haiyan Yin, et al. · Baidu, Inc.

Training generative models that can generate high-quality text with sufficient diversity is an important open problem for the Natural Language Generation (NLG) community. Recently, generative adversarial models have been applied extensively to text generation tasks, where the adversarially trained generators alleviate the exposure bias experienced by conventional maximum likelihood approaches and achieve promising generation quality. However, due to the notorious defect of mode collapse in adversarial training, the adversarially trained generators face a quality-diversity trade-off, i.e., the generator models tend to sacrifice generation diversity severely for increasing generation quality. In this paper, we propose a novel approach which aims to improve the performance of adversarial text generation by efficiently decelerating the mode collapse of adversarial training. To this end, we introduce a cooperative training paradigm, where a language model is cooperatively trained with the generator and we utilize the language model to efficiently shape the data distribution of the generator against mode collapse. Moreover, instead of engaging the cooperative update for the generator in a principled way, we formulate a meta learning mechanism, where the cooperative update to the generator serves as a high-level meta task, with the intuition of ensuring that the parameters of the generator after the adversarial update remain resistant to mode collapse. In the experiments, we demonstrate that our proposed approach can efficiently slow down the pace of mode collapse for adversarial text generators. Overall, our proposed method outperforms the baseline approaches by significant margins in terms of both generation quality and diversity in the evaluated domains.


1 Introduction

Generative models are trained to learn the true data distribution from the training set and are capable of generating new data points once training is completed. In recent years, they have been successfully applied to a wide range of applications, including image generation [2], stylization [32], semi-supervised classification [26], and natural language generation [3, 20, 29]. In this paper, we tackle the emerging task of text generation, which is typically modeled as a sequential discrete data generation process [16]. Such tasks play a pivotal role in many real-world applications, such as machine translation [30], text summarization [19, 22], and dialogue systems [33, 18].

The training of sequential text generation models has largely relied on applying teacher forcing over autoregressive models, i.e., optimizing with maximum likelihood estimation (MLE) [5]. However, training generative models with teacher forcing suffers from exposure bias [27]: at inference time the models are fed their own predictions rather than the ground-truth data, and thus generate poor samples due to accumulated error. To address the exposure bias issue, a major line of ongoing research for text generation centers on utilizing adversarial training techniques to derive better text generation models. Generally, such attempts can be classified into two strands [6]: the first line of approaches combines generative adversarial networks (GAN) [10] with reinforcement learning (RL), denoted as RL-based; the second line of approaches solely plays the two-player adversarial game without using RL, denoted as RL-free.

Both RL-based and RL-free text generation approaches suffer from mode collapse, a notorious challenge for training GAN-based models [2]. That is, as the adversarial training progresses, the generated distribution tends to collapse towards a subset of the modes of the data. As a result, the generator outputs repeated sentences and no longer expressively represents the data generating distribution. This effect has been quantitatively evaluated in a recent study, which shows that the entropy of the generator's output distribution experiences a clear drop when moving from the MLE training to the adversarial training phase [4]. To derive better text generation models with GAN-based techniques, one critical requirement is to achieve a better quality-diversity trade-off by efficiently slowing down the mode collapse of the adversarial generator, i.e., to let the generator get abundant gradient information from the adversarial update for making its output more real (improving quality) while suffering only a small mode collapse effect (losing little diversity). However, only a limited number of existing RL-based or RL-free approaches explicitly consider dealing with the mode collapse of GAN training. In this work, we propose a cooperative training mechanism which explicitly tackles the challenge of mode collapse for adversarial training, resulting in an improved text generation model.

Overall, the contributions of this paper are three-fold. Firstly, we propose a novel cooperative training approach where we utilize a language model to efficiently shape the output distribution of the adversarial text generator. Our proposed approach efficiently slows down the mode collapse of the adversarial text generator and thus leads text generation towards a better quality-diversity trade-off. Secondly, to optimize the cooperative training loss for the generator, we propose a novel meta-learning mechanism. In our setting, the cooperative training task serves as a meta task and the adversarial training serves as a base task. Thus, our proposed approach ensures that the generator parameters after the adversarial update remain resistant to mode collapse. Thirdly, we conduct extensive experiments on synthetic and real-world datasets to demonstrate that our proposed approach is able to produce better text generation models in terms of both quality and diversity.

2 Related Work

Besides the conventional approaches of training language models with teacher forcing, today's approaches for text generation can generally be classified as RL-based or RL-free. Most RL-based approaches formulate text generation as a Markov Decision Process (MDP). Often, the generator is updated by the policy gradient algorithm [31] or its variants using reward signals derived from GAN's discriminator. Prominent examples of this type of approach include SeqGAN [37], RankGAN [21], LeakGAN [11] and MaskGAN [8]. The noisy reward signals derived from the discriminator model make such RL-based models suffer from high-variance gradients when updating the generator's parameters. Besides high-variance gradients, the RL-based approaches also face the difficulties brought by partial sequence evaluation, slow learning, and sensitive hyperparameters [4]. Considering such challenges for the RL-based approaches, our proposed method resides in, but is not restricted to, the category of RL-free approaches for text generation. Prominent examples of RL-free approaches include TextGAN [38], FM-GAN [6], GSGAN [15], and RelGAN [25]. Such approaches feed the generator with low-variance gradients and often lead to more stable training.

Most adversarial text generation models are first pretrained with MLE, and then continuously optimized by adversarial training under either an RL-based or RL-free mechanism. When switched from the MLE training to the adversarial training phase, the generator models for both RL-based and RL-free approaches suffer from the mode collapse issue. In this work, our core intuition is to utilize a cooperatively trained language model to decelerate the mode collapse of adversarial training. This intuition of utilizing a language model to facilitate adversarial text generation aligns with the works proposed in [35, 24]. In [35], the discriminator for adversarial training is modeled as a language model, which maximizes the probability of real data and minimizes that of generated data. Furthermore, the output derived from the language model is adopted as a reward signal to promote generation diversity under an RL-based set-up. Our work is most related to the cooperative training method proposed in [23], where a language model is trained online to offer a target distribution for minimizing the Jensen-Shannon divergence between the real data distribution and the generated distribution. In our work, we adopt a similar strategy to train the language model, but the cooperative training for the generator model is different from [23]. Furthermore, we propose a distinct meta learning set-up to optimize the cooperative training loss for the generator. To the best of our knowledge, our work is the first attempt that adopts meta learning for text generation GANs.

3 Preliminaries

The task of text generation is typically modelled as a sequential discrete data generation process. Let $x$ be a data point drawn from an underlying data generating distribution $p_{data}$. Each data point is represented as a sequence of discrete tokens $x = (x_1, \dots, x_T)$, where $x_t$ denotes the $t$-th token and $T$ denotes the length of the sequence. Let $G_\theta$ denote the generator model parameterized by $\theta$. Conventional text generation approaches typically train a language model with maximum likelihood estimation (MLE) as follows:

$$\mathcal{L}_{MLE}(\theta) = -\,\mathbb{E}_{x \sim p_{data}}\big[\log G_\theta(x)\big],$$

where the probability of each sequence $x$ is represented in an autoregressive manner:

$$G_\theta(x) = \prod_{t=1}^{T} G_\theta(x_t \mid x_{<t}),$$

with $x_{<t}$ denoting the sequence of previous tokens $(x_1, \dots, x_{t-1})$.
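As a concrete reference, the following is a minimal sketch of the teacher-forcing MLE objective, assuming a hypothetical `generator` callable that maps a token prefix to next-token logits (PyTorch is assumed throughout these sketches).

```python
import torch.nn.functional as F

def mle_loss(generator, x):
    """Teacher-forcing MLE loss: predict token x_t from the ground-truth prefix x_{<t}.

    x: LongTensor of shape (batch, T) holding token indices.
    generator(prefix) is assumed to return next-token logits of shape (batch, vocab).
    """
    _, T = x.shape
    nll = 0.0
    for t in range(1, T):
        logits = generator(x[:, :t])                  # (batch, vocab)
        nll = nll + F.cross_entropy(logits, x[:, t])  # mean NLL over the batch
    return nll / (T - 1)
```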

The approaches utilizing GANs for text generation play a two-player game between the generator $G_\theta$ and a discriminator $D_\phi$, parameterized by $\phi$. Under the adversarial set-up, the generator is trained to generate realistic sentences, and the discriminator attempts to distinguish between $G_\theta$'s generating distribution $p_{G_\theta}$ and the real data distribution $p_{data}$. The above process can be formulated as an adversarial training objective as follows:

$$\min_{\theta}\,\max_{\phi}\;\; \mathbb{E}_{x \sim p_{data}}\big[\log D_\phi(x)\big] + \mathbb{E}_{\hat{x} \sim p_{G_\theta}}\big[\log\big(1 - D_\phi(\hat{x})\big)\big], \quad (1)$$

where the generator and the discriminator attempt to minimize and maximize the objective, respectively. We denote the adversarial loss in (1) in terms of the generator model and the discriminator model as $\mathcal{L}_{adv}^{G}$ and $\mathcal{L}_{adv}^{D}$, respectively.
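A minimal sketch of the two adversarial losses in (1), assuming a discriminator that outputs a single logit per (relaxed one-hot) sequence; the non-saturating form is shown for the generator, which is a common choice rather than necessarily the exact loss used in this paper.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(discriminator, real_onehot, fake_onehot):
    """Adversarial loss for D: score real sequences high and generated ones low."""
    real_logits = discriminator(real_onehot)            # (batch, 1)
    fake_logits = discriminator(fake_onehot.detach())   # block gradients into G
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake

def generator_adversarial_loss(discriminator, fake_onehot):
    """Non-saturating generator loss: maximize log D(x_fake) rather than minimize log(1 - D(x_fake))."""
    fake_logits = discriminator(fake_onehot)
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```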

With the autoregressive generation process, the $t$-th token is generated by sampling from the generator's output distribution, conditioned on its previous tokens $x_{<t}$. Performing such sampling introduces considerable difficulty for the generator to utilize the discriminator's prediction outcome. That is, the backpropagation route for the adversarial loss becomes non-differentiable w.r.t. the generator's parameters $\theta$, since the gradient through the sampled tokens would be zero. To overcome this issue, the RL-based approaches mostly rely on the REINFORCE algorithm [34] or its variants to derive the gradient for optimizing the generator, where the discriminator's predictions are utilized as reward signals. The RL-free approaches often relax the non-differentiable sampling function with a continuous approximation, such as soft-argmax [38] or gumbel-softmax [13]. In this paper, our proposed approach adopts the gumbel-softmax relaxation, which models the effect of sampling as adding noise to the logits so that the outputs become continuous and differentiable. Specifically, the noise is modeled by a Gumbel distribution, formed as follows:

$$g_t = -\log\big(-\log U_t\big), \quad U_t \sim \mathrm{Uniform}(0, 1),$$

where $g_t$ denotes the Gumbel noise to be applied to the $t$-th step logits. With the Gumbel noise, the token for the next step is derived in a deterministic manner:

$$x_{t+1} = \mathrm{one\_hot}\Big(\arg\max_{1 \le i \le V}\big(o_t^{(i)} + g_t^{(i)}\big)\Big),$$

where $o_t$ denotes the logits output by the generator for sampling token $x_{t+1}$, and $V$ denotes the vocabulary size. To make the discriminator's loss differentiable, the argmax operator is replaced by a softmax function, i.e., $\hat{x}_{t+1} = \mathrm{softmax}\big(\beta(o_t + g_t)\big)$, where $\beta$ is a real-valued temperature hyperparameter, with larger $\beta$ pushing the relaxed output closer to a one-hot vector.
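A minimal sketch of this Gumbel-softmax relaxation for a single generation step; the tensor shapes and the default value of `beta` are assumptions.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, beta=100.0, eps=1e-10):
    """Relaxed one-hot sample for one generation step.

    logits: (batch, vocab) raw scores o_t from the generator.
    Returns a differentiable (batch, vocab) tensor; larger beta pushes the
    output closer to the one-hot argmax of (logits + Gumbel noise).
    """
    u = torch.rand_like(logits)
    gumbel_noise = -torch.log(-torch.log(u + eps) + eps)   # Gumbel(0, 1) samples
    return F.softmax(beta * (logits + gumbel_noise), dim=-1)
```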

4 Methodology

Language generators trained with an adversarial mechanism (both RL-based and RL-free) suffer from mode collapse when switched from teacher forcing to the adversarial training phase. In this section, we introduce a novel meta cooperative training algorithm to overcome this challenge. Our objective is to achieve a better quality-diversity trade-off for the language generators by decelerating the mode collapse of their adversarial training. That is, the algorithm allows the generator to get abundant gradient information from the adversarial training for increasing generation quality, while sacrificing little generation diversity. To this end, we engage a language model to decelerate the collapse of the generator's output distribution. The language model is cooperatively trained with the generator during adversarial training, and its output over samples from the real data distribution is used to shape the generator's output distribution. Furthermore, this supervision is formulated within a meta optimization setup.

Figure 1: Depiction of our proposed cooperative training mechanism. The generator trained with the adversarial loss tends to suffer from mode collapse (dark blue arrows). We engage a language model to supervise the output distribution of the generator to decelerate mode collapse (yellow arrows). The language model is trained on a mixture distribution of samples from the real data and the generator. The supervision from the language model to the generator works on samples from the real data distribution. The generator is updated by the adversarial loss and the cooperative training loss.

4.1 Cooperative Training Formulation

We introduce a cooperative training paradigm that engages an interleaved training procedure for an adversarial generator $G_\theta$, an adversarial discriminator $D_\phi$, and a language model $M_\psi$, where $\psi$ denotes the parameters of the language model. Figure 1 depicts a high-level overview of the proposed cooperative training procedure. When the generator is trained by the adversarial loss, its generation diversity progressively decreases as generation quality increases, due to the mode collapse issue. To overcome this challenge, we cooperatively train a language model $M_\psi$. The language model poses supervision over $G_\theta$'s output distribution towards preserving a desirable generation probability for the real data.

During the cooperative training process, the language model $M_\psi$ is optimized consistently by the MLE loss. To offer a smoothly changing target distribution for the generator, it is trained with data from a mixture distribution with balanced samples from real data and generated data, i.e., $\frac{1}{2}\big(p_{data} + p_{G_\theta}\big)$. Formally, the cooperative training loss for updating the language model with MLE is defined in (2). It can be interpreted as minimizing the direct KL divergence between $M_\psi$ and an optimal mixture density model whose distribution is $\frac{1}{2}\big(p_{data} + p_{G_\theta}\big)$.

$$\mathcal{L}_{LM}(\psi) = -\,\frac{1}{2}\,\mathbb{E}_{x \sim p_{data}}\big[\log M_\psi(x)\big] - \frac{1}{2}\,\mathbb{E}_{\hat{x} \sim p_{G_\theta}}\big[\log M_\psi(\hat{x})\big]. \quad (2)$$

Consistently updating the language model with samples from the real data using the teacher forcing loss makes it experience only a mild mode collapse effect. Thus, its output predictions can offer an effective supervision over the generator $G_\theta$'s output distribution for decelerating mode collapse. Moreover, updating $M_\psi$ with the mixture distribution, compared to only using the real data distribution, offers a target distribution that changes smoothly with the generator's updates, which turns out to be more beneficial.
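A minimal sketch of this language model update under the assumptions above; `nll_fn` is a hypothetical teacher-forcing NLL helper (e.g., the `mle_loss` sketch from the preliminaries), and the 1:1 mixing follows the balanced-mixture description in (2).

```python
def language_model_step(lm, lm_optimizer, real_tokens, fake_tokens, nll_fn):
    """One MLE update of the language model M on a balanced real/generated mixture (Eq. 2).

    nll_fn(model, tokens) is assumed to return a scalar teacher-forcing NLL;
    the generated tokens are treated as fixed targets here.
    """
    loss = 0.5 * nll_fn(lm, real_tokens) + 0.5 * nll_fn(lm, fake_tokens)
    lm_optimizer.zero_grad()
    loss.backward()
    lm_optimizer.step()
    return loss.item()
```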

Formally, the cooperative training loss for the generator model is proposed as follows:

$$\mathcal{L}_{co}(\theta) = \mathbb{E}_{x \sim p_{data}}\Big[\sum_{t=1}^{T} \mathrm{KL}\big(M_\psi(\cdot \mid x_{<t}) \,\|\, G_\theta(\cdot \mid x_{<t})\big)\Big], \quad (3)$$

where $x_t$ is the $t$-th token of the sequence $x$. Thus, the KL-loss distills the output distribution given by the language model to the generator [12, 28, 36]. When considering mode collapse, we are only interested in preserving the distribution of the real data from $p_{data}$, rather than that of samples from $G_\theta$. Therefore, when optimizing (3), we only adopt samples from the real data distribution to compute the KL-loss. With the above cooperative training loss, the gradient for updating the generator's parameters is derived as follows:

$$\nabla_\theta \mathcal{L}_{co}(\theta) = -\,\mathbb{E}_{x \sim p_{data}}\Big[\sum_{t=1}^{T} \sum_{w} M_\psi(w \mid x_{<t})\, \nabla_\theta \log G_\theta(w \mid x_{<t})\Big].$$

As such, the effect of applying cooperative training on the generator is equivalent to increasing the density of the real data in a weighted manner.
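The following is a minimal sketch of this cooperative distillation loss, assuming both the generator and the language model expose a callable that maps a token prefix to next-token logits; it illustrates the KL term in (3) rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def cooperative_kl_loss(generator, lm, real_tokens):
    """KL(M || G) summed over time steps, conditioned on real prefixes x_{<t} (Eq. 3).

    Both generator(prefix) and lm(prefix) are assumed to return next-token
    logits of shape (batch, vocab).
    """
    _, T = real_tokens.shape
    loss = 0.0
    for t in range(1, T):
        prefix = real_tokens[:, :t]
        with torch.no_grad():  # the language model acts as a fixed teacher in this loss
            lm_probs = F.softmax(lm(prefix), dim=-1)
        gen_log_probs = F.log_softmax(generator(prefix), dim=-1)
        # F.kl_div(input=log q, target=p) computes KL(p || q), averaged over the batch
        loss = loss + F.kl_div(gen_log_probs, lm_probs, reduction="batchmean")
    return loss / (T - 1)
```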

4.2 Meta Cooperative Optimization

In this section, we introduce a meta learning paradigm to interleave the optimization of the adversarial training loss and the cooperative training loss for the generator model parameters. Unlike the conventional meta-learning approaches that work on achieving faster learning [9], task generalization [17] or deriving adaptive models [1], our intuition is to preserve the generative distribution for the adversarial text generator model to decelerate its mode collapse.

To this end, optimizing the adversarial loss is modelled as the base task, and optimizing the cooperative training loss is modelled as the meta task. With this setting, the meta optimization scheme ensures that after the generator parameters are optimized with the adversarial training loss for increasing generation quality, the resulting parameters demonstrate considerable resistance to mode collapse, i.e., the model gains generation quality while largely preserving generation diversity.

Formally, we first perform one gradient update on the generator parameters by optimizing the base task loss:

$$\theta' = \theta - \alpha\, \nabla_\theta \mathcal{L}_{adv}^{G}(\theta),$$

where $\alpha$ is the learning rate. Then, we obtain new samples from the real data distribution, $x \sim p_{data}$, and evaluate the meta loss $\mathcal{L}_{co}$ for the real samples on the updated parameters $\theta'$. The meta gradient is weighted by a coefficient $\lambda$ and added to the base task gradient to update the parameters $\theta$. Finally, the adversarial update under our proposed meta cooperative training paradigm can be formulated as below:

$$\theta \leftarrow \theta - \alpha\Big(\nabla_\theta \mathcal{L}_{adv}^{G}(\theta) + \lambda\, \nabla_\theta \mathcal{L}_{co}(\theta')\Big).$$
The full algorithm for meta cooperative training is presented in Algorithm 1.

Require: learning rate α, meta weight λ, training data distribution p_data
Ensure: Generator G_θ
1: Randomly initialize θ, φ, ψ
2: Pretrain G_θ with samples from p_data
3: Assign the weights from G_θ to the language model M_ψ
4: while not done do
5:     Sample x ~ p_data
6:     Generate x̂ with G_θ
7:     Compute adversarial loss L_adv^G
8:     θ' ← θ − α ∇_θ L_adv^G(θ)                         ▷ Base task update
9:     Compute L_co(θ') with language model M_ψ
10:    g_meta ← λ ∇_θ L_co(θ')                            ▷ Compute meta gradient
11:    θ ← θ − α (∇_θ L_adv^G(θ) + g_meta)                ▷ Generator update
12:    φ ← φ − α ∇_φ L_adv^D                              ▷ Discriminator update
13:    ψ ← ψ − α ∇_ψ L_LM                                 ▷ Language model update
14: end while
15: return Generator G_θ
Algorithm 1 Meta Cooperative Training
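Below is a minimal sketch of one iteration of Algorithm 1. The loss helpers (`adv_g_loss`, `adv_d_loss`, `coop_loss`, `lm_loss`) are hypothetical callables, and the first-order treatment of the meta gradient (applying the gradient evaluated at θ' back to θ) is an assumption rather than the authors' exact optimization.

```python
import copy
import torch

def meta_cotgan_step(G, D, M, opt_G, opt_D, opt_M,
                     real_batch, real_batch_meta,
                     adv_g_loss, adv_d_loss, coop_loss, lm_loss,
                     alpha=1e-4, lam=1.0):
    """One training iteration in the spirit of Algorithm 1 (first-order meta gradient)."""
    # Base task: adversarial generator loss at theta.
    loss_adv = adv_g_loss(G, D, real_batch)
    grads_adv = torch.autograd.grad(loss_adv, list(G.parameters()))

    # Build theta' = theta - alpha * grad(L_adv) on a temporary copy of G.
    G_prime = copy.deepcopy(G)
    with torch.no_grad():
        for p, g in zip(G_prime.parameters(), grads_adv):
            p -= alpha * g

    # Meta task: cooperative (KL) loss on fresh real samples, evaluated at theta'.
    loss_coop = coop_loss(G_prime, M, real_batch_meta)
    grads_coop = torch.autograd.grad(loss_coop, list(G_prime.parameters()))

    # Combined generator update: base gradient plus weighted meta gradient.
    opt_G.zero_grad()
    for p, g_adv, g_coop in zip(G.parameters(), grads_adv, grads_coop):
        p.grad = g_adv + lam * g_coop
    opt_G.step()

    # Discriminator update on the adversarial loss.
    loss_d = adv_d_loss(G, D, real_batch)
    opt_D.zero_grad()
    loss_d.backward()
    opt_D.step()

    # Language model update on the real/generated mixture (Eq. 2).
    loss_m = lm_loss(M, G, real_batch)
    opt_M.zero_grad()
    loss_m.backward()
    opt_M.step()
```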

5 Experiments

We denote our proposed meta cooperative training generative adversarial network as Meta-CoTGAN. In this section, we first compare our proposed algorithm with its closest cooperative training counterpart, CoT [23], on the synthetic dataset. Then we compare our method with several RL-based and RL-free approaches on two commonly used real-world text generation datasets: COCO Image Captions [7] and EMNLP 2017 WMT News (http://statmt.org/wmt17/translation-task.html).

Implementation Details

We implement our proposed algorithm on top of RelGAN [25], an RL-free adversarial text generation model that is among the state-of-the-art approaches. Specifically, RelGAN adopts a relational memory to model long-distance dependencies among the input tokens, and a gumbel-softmax relaxation to overcome the non-differentiability issue in generator training. The relational memory adopts 1 memory slot, multi-head attention with 2 heads, and an attention key size of 512. The language model for cooperative training adopts the identical network architecture as the generator, and the generator's weights are assigned to the language model after pretraining. The discriminator adopts multiple representations with size 64. We adopt Adam [14] as the optimization algorithm for updating all the model parameters. The source code of our framework is based on the PaddlePaddle (https://www.paddlepaddle.org.cn/) platform.
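For reference, a hypothetical configuration sketch collecting the hyperparameters stated above; only values explicitly mentioned in this section are filled in.

```python
# Hypothetical configuration dictionary mirroring the settings stated above;
# values not mentioned in the text are deliberately left out.
meta_cotgan_config = {
    "generator": {
        "architecture": "relational memory (as in RelGAN)",
        "memory_slots": 1,
        "attention_heads": 2,
        "attention_key_size": 512,
    },
    "language_model": "same architecture as the generator, initialized from it after pretraining",
    "discriminator": {"num_representations": 64},
    "optimizer": "Adam",
    "gumbel_softmax_temperatures": [100, 1000],
}
```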

Evaluation Metrics

For comparison, we evaluate the models in terms of sample quality and sample diversity simultaneously. Following most of today's text generation works (e.g., [37, 23]), sample quality is evaluated by the BLEU score metrics on the real datasets, and by the NLL_oracle loss on the synthetic dataset. The NLL_oracle loss is defined as the negative log-likelihood derived from the target LSTM model for the data generated by $G_\theta$. Sample diversity is evaluated in terms of the NLL_gen loss, which takes the following form:

$$\mathrm{NLL}_{gen} = -\,\mathbb{E}_{x \sim p_{data}}\big[\log G_\theta(x)\big],$$

where the density of the real data is evaluated on the generator model. Thus, models with better sample diversity have a broader coverage over the real data space and result in a lower NLL_gen loss. Models that suffer from severe mode collapse no longer represent the real data well and result in a higher NLL_gen loss.
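A minimal sketch of this diversity metric, i.e., the average negative log-likelihood of held-out real sentences under the generator; `nll_fn` is a hypothetical teacher-forcing NLL helper.

```python
import torch

def nll_gen(generator, real_loader, nll_fn):
    """Diversity metric: average NLL of held-out real sentences under the generator.

    Lower values indicate broader coverage of the real data distribution;
    severe mode collapse drives this value up.
    nll_fn(model, tokens) is assumed to return a scalar teacher-forcing NLL.
    """
    total, num_batches = 0.0, 0
    with torch.no_grad():
        for real_tokens in real_loader:
            total += nll_fn(generator, real_tokens).item()
            num_batches += 1
    return total / max(num_batches, 1)
```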

Baseline Models

To evaluate the efficiency of our proposed approach, we consider MLE as well as the RL-based baselines, including SeqGAN [37], RankGAN [21] and LeakGAN [11]. We also compare with the most related RL-free baseline, RelGAN [25]. During evaluation, we follow the temperature settings proposed in RelGAN and present the results for our method when evaluated with temperature values of 100 and 1000, respectively.

5.1 Synthetic Dataset

Our first evaluation domain is the synthetic oracle dataset, first proposed in [37]. The experiment engages a randomly initialized LSTM model as the target (oracle) model to simulate real-world sequences and generate data from the real data distribution. The synthetic experiments are conducted with the sequence length set to 20. The objective of experimenting in this domain is to compare our proposed method with its closest cooperative training counterpart, CoT. While the two models adopt the same way of training the language model, we investigate the efficiency of the respective cooperative training losses applied to the generator model.

Figure 2: Evaluation result on the synthetic oracle with length 20 in terms of NLL_oracle loss. Overall, our model converges to a significantly better standard than CoT.

We show the learning curves for the NLL_oracle loss in Figure 2. Note that CoT has no pretraining stage and its loss decreases progressively, whereas our method takes a pretraining stage and the loss decreases in both the pretraining and the adversarial training stages. Upon convergence, the NLL_oracle loss for our method is significantly lower than that of CoT, which demonstrates that the cooperative training mechanism proposed by CoT is not comparable to our method in terms of sample quality. We also present the evaluation scores for NLL_oracle and NLL_gen in Table 1. When comparing NLL_gen, our method achieves a much lower loss than CoT, demonstrating that our proposed algorithm conveys greater efficiency in preserving sample diversity. Overall, considering the inferior performance and long training time of CoT, we do not consider it further in the following real-world dataset experiments.

Method NLL_oracle NLL_gen
CoT 8.19 7.54
Meta-CoTGAN 7.69 6.86
Table 1: Evaluation result on the synthetic oracle with sequence length 20. For CoT, we present its best reported score.
Method BLEU-2 BLEU-3 BLEU-4 BLEU-5 NLL_gen
MLE 0.731 0.497 0.305 0.189 0.718
SeqGAN 0.745 0.498 0.294 0.180 1.082
RankGAN 0.743 0.467 0.264 0.156 1.344
LeakGAN 0.746 0.528 0.355 0.230 0.679
RelGAN (100) 0.849 ± 0.030 0.687 ± 0.047 0.502 ± 0.048 0.331 ± 0.044 0.756 ± 0.054
RelGAN (1000) 0.814 ± 0.012 0.634 ± 0.020 0.455 ± 0.023 0.303 ± 0.020 0.655 ± 0.048
Meta-CoTGAN (100) 0.858 ± 0.003 0.692 ± 0.005 0.518 ± 0.007 0.363 ± 0.009 0.578 ± 0.036
Meta-CoTGAN (1000) 0.842 ± 0.011 0.675 ± 0.019 0.502 ± 0.026 0.349 ± 0.024 0.583 ± 0.028
Table 2: Evaluations on the COCO Image Captions dataset. For RelGAN and Meta-CoTGAN, the temperature (in parentheses) is set to 100 or 1000, and results are averaged over 6 runs (random seeds). For NLL_gen (last column), smaller is better.
Method BLEU-2 BLEU-3 BLEU-4 BLEU-5 NLL_gen
MLE 0.768 0.473 0.240 0.126 2.382
SeqGAN 0.777 0.491 0.261 0.138 2.773
RankGAN 0.727 0.435 0.209 0.101 3.345
LeakGAN 0.826 0.645 0.437 0.272 2.356
RelGAN (100) 0.881 ± 0.013 0.705 ± 0.019 0.501 ± 0.023 0.319 ± 0.018 2.482 ± 0.031
RelGAN (1000) 0.837 ± 0.012 0.654 ± 0.010 0.435 ± 0.011 0.265 ± 0.011 2.285 ± 0.025
Meta-CoTGAN (100) 0.882 ± 0.014 0.734 ± 0.017 0.542 ± 0.016 0.358 ± 0.015 2.299 ± 0.011
Meta-CoTGAN (1000) 0.868 ± 0.015 0.703 ± 0.014 0.500 ± 0.016 0.318 ± 0.016 2.205 ± 0.053
Table 3: Evaluations on the EMNLP2017 WMT News dataset. See the caption of Table 2 for more details.

5.2 COCO Image Captions Dataset

Our second evaluation domain is the COCO Image Captions dataset. We follow the pre-processing method proposed in [39]. The training and testing sets each consist of 10,000 sentences. The sentences in COCO have a minimum length of 7 and a maximum length of 37. The vocabulary size is 4,682.

We present the BLEU-2 to BLEU-5 scores for measuring sample quality, and the NLL_gen score for measuring sample diversity, in Table 2. Overall, our method demonstrates a significant advantage on all sample quality/diversity metrics. Notably, our method attains an NLL_gen loss significantly lower than the other baseline approaches, indicating that it provides efficient control over mode collapse during adversarial training and eventually leads to superior sample diversity. While decelerating mode collapse, the cooperative training results in a model with better sample quality as well.

Method BLEU-2 BLEU-3 BLEU-4 BLEU-5 NLL_gen
RelGAN (100) 0.849 ± 0.030 0.687 ± 0.047 0.502 ± 0.048 0.331 ± 0.044 0.756 ± 0.054
Meta-CoTGAN (100) 0.858 ± 0.003 0.692 ± 0.005 0.518 ± 0.007 0.363 ± 0.009 0.578 ± 0.036
Meta-CoTGAN (100), w/o LM online update 0.824 ± 0.011 0.647 ± 0.022 0.466 ± 0.028 0.315 ± 0.022 0.580 ± 0.031
Meta-CoTGAN (100), w/o meta optimization 0.835 ± 0.013 0.661 ± 0.016 0.487 ± 0.016 0.338 ± 0.014 0.587 ± 0.019
RelGAN (1000) 0.814 ± 0.023 0.634 ± 0.020 0.455 ± 0.023 0.303 ± 0.020 0.655 ± 0.048
Meta-CoTGAN (1000) 0.842 ± 0.011 0.675 ± 0.019 0.502 ± 0.026 0.349 ± 0.024 0.583 ± 0.028
Meta-CoTGAN (1000), w/o LM online update 0.824 ± 0.007 0.643 ± 0.009 0.497 ± 0.013 0.324 ± 0.015 0.582 ± 0.017
Meta-CoTGAN (1000), w/o meta optimization 0.817 ± 0.021 0.638 ± 0.027 0.465 ± 0.025 0.319 ± 0.018 0.589 ± 0.022
Table 4: Ablation study on the COCO Image Captions dataset. We evaluate our proposed model when the online update of the cooperative language model and the meta optimization have been turned off, respectively. Reported scores are derived from 6 random seeds.

To further validate this, we present the learning curves for the sample diversity metric and for BLEU-5, as a representative sample quality metric, in Figure 3. We observe that the NLL_gen loss for RelGAN quickly goes up, which is a sign of mode collapse, while that for Meta-CoTGAN increases rather slowly. This shows that our proposed method can efficiently decelerate mode collapse and keep the loss from exploding. When examining the sample quality metric, we observe that the BLEU-5 score for RelGAN goes up faster than that of Meta-CoTGAN, but eventually our model achieves a significantly higher score than RelGAN. We also observe that when the NLL_gen loss for RelGAN explodes (e.g., after 400 epochs), the repeat rate is rather high and the generator becomes practically useless, whereas our method preserves much better diversity. Moreover, we observe from the generated samples that our model can generate quite long sentences, where most GAN models fall short [4].

Figure 3: Quality-diversity trade-off for our method and the baseline RelGAN on the COCO Image Captions dataset. Our model progressively achieves a better BLEU-5 score than RelGAN while mode collapse progresses noticeably more slowly. The BLEU-5 curve for RelGAN is plotted up to the point where its corresponding NLL_gen loss reaches its reported level; beyond that, the BLEU-5 score is no longer meaningful since the model has fallen into severe mode collapse (i.e., generating repeated sentences).

5.3 EMNLP2017 WMT News Dataset

Our third evaluation domain is the EMNLP2017 WMT News dataset. This dataset is much larger than COCO Image Captions, with a training set of 270,000 sentences and a testing set of 10,000 sentences. The sentences have a maximum length of 51. The vocabulary size is 5,255.

The results for the EMNLP dataset are presented in Table 3. Our proposed method consistently outperforms all baselines in terms of all the BLEU metrics and NLL_gen. Under the temperature setting of 100, our method outperforms the strong RelGAN baseline by about 0.04 on BLEU-4/BLEU-5. Noticeably, the best BLEU scores for our method are obtained when the NLL_gen loss is at a significantly lower level than RelGAN's. This indicates that by conducting cooperative training, we can derive a generator model with better sample quality and sample diversity simultaneously, and that our method performs robustly on rather challenging and diverse real-world datasets like EMNLP, consistently outperforming RelGAN under both temperature settings over all the evaluation metrics. By inspecting the generated samples, we observe that the generated sentences convey rather diverse semantics and include considerably long sentences, unlike conventional adversarial text generators that soon fall into generating short and repeated sentences.

5.4 Ablation Study

5.4.1 Impact of Cooperative Training Language Model

We examine the impact of using an online-updated language model in our proposed cooperative training process. As a direct comparison, we use a pretrained language model that is not updated during cooperative training; we denote this baseline as Meta-CoTGAN w/o LM online update. The results on the COCO Image Captions dataset are shown in Table 4. We observe that when the online update of the language model is turned off, the model still preserves comparable sample diversity in terms of NLL_gen, since the cooperative training loss is still employed on the real data. However, under both temperature settings, the sample quality metrics do not perform as well as the full version of the proposed method. This shows that it is beneficial to update the language model jointly with the generator, so that it offers a smoothly changing target distribution to the generator.

5.4.2 Impact of Meta Optimization

We also evaluate the impact of the meta optimization setup. To this end, we compare our approach with a principled way of engaging the cooperative training loss for optimizing the generator parameters, namely linearly summing the adversarial loss and the cooperative training loss in a weighted manner, i.e., $\mathcal{L}_{adv}^{G} + \lambda\,\mathcal{L}_{co}$. We denote this baseline as Meta-CoTGAN w/o meta optimization. The results are shown in Table 4. Overall, Meta-CoTGAN w/o meta optimization obtains comparable NLL_gen scores. However, its performance on the sample quality metrics is still much inferior to the full solution. Thus, we conclude that meta optimization is an important ingredient for balancing the quality-diversity trade-off. Intuitively, our proposed meta optimization set-up offers an efficient way to ensure that the generator parameters after the adversarial update remain resistant to mode collapse, which is critical for the superior performance.

6 Conclusion and Discussion

We propose a meta cooperative training approach to facilitate the training of adversarial text generation models. Our method utilizes a cooperatively trained language model to effectively decelerate the mode collapse of adversarial training, by distilling the language model's output distribution over the real data to the adversarial generator. We evaluate our proposed method on a synthetic dataset and two real-world datasets with sequence lengths ranging from 7 to 51. Our proposed method consistently outperforms the baseline algorithms on the sample quality metrics and the sample diversity metric simultaneously. Our approach is general and is promising to combine with distinct RL-based or RL-free adversarial text generation algorithms, as long as they face the issue of mode collapse. Our future work is to apply meta cooperative training to more emerging RL-based/RL-free GAN models.

References

  • [1] M. Al-Shedivat, T. Bansal, Y. Burda, I. Sutskever, I. Mordatch, and P. Abbeel (2018) Continuous adaptation via meta-learning in nonstationary and competitive environments. In ICLR, Cited by: §4.2.
  • [2] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein generative adversarial networks. In ICML, pp. 214–223. Cited by: §1, §1.
  • [3] D. Bahdanau, K. Cho, and Y. Bengio (2015) Neural machine translation by jointly learning to align and translate. In ICLR, Cited by: §1.
  • [4] M. Caccia, L. Caccia, W. Fedus, H. Larochelle, J. Pineau, and L. Charlin (2018) Language gans falling short. arXiv preprint arXiv:1811.02549. Cited by: §1, §2, §5.2.
  • [5] T. Che, Y. Li, R. Zhang, R. D. Hjelm, W. Li, Y. Song, and Y. Bengio (2017) Maximum-likelihood augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983. Cited by: §1.
  • [6] L. Chen, S. Dai, C. Tao, H. Zhang, Z. Gan, D. Shen, Y. Zhang, G. Wang, R. Zhang, and L. Carin (2018) Adversarial text generation via feature-mover’s distance. In NeurIPS, pp. 4666–4677. Cited by: §1, §2.
  • [7] X. Chen, H. Fang, T. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick (2015) Microsoft coco captions: data collection and evaluation server. arXiv preprint arXiv:1504.00325. Cited by: §5.
  • [8] W. Fedus, I. Goodfellow, and A. M. Dai (2018) MaskGAN: better text generation via filling in the_. In ICLR, Cited by: §2.
  • [9] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, pp. 1126–1135. Cited by: §4.2.
  • [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In NIPS, pp. 2672–2680. Cited by: §1.
  • [11] J. Guo, S. Lu, H. Cai, W. Zhang, Y. Yu, and J. Wang (2018) Long text generation via adversarial training with leaked information. In AAAI, pp. 5141–5148. Cited by: §2, §5.
  • [12] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Cited by: §4.1.
  • [13] E. Jang, S. Gu, and B. Poole (2017) Categorical reparameterization with gumbel-softmax. In ICLR, Cited by: §3.
  • [14] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR, Cited by: §5.
  • [15] M. J. Kusner and J. M. Hernández-Lobato (2016) Gans for sequences of discrete elements with the gumbel-softmax distribution. arXiv preprint arXiv:1611.04051. Cited by: §2.
  • [16] A. M. Lamb, A. G. A. P. Goyal, Y. Zhang, S. Zhang, A. C. Courville, and Y. Bengio (2016) Professor forcing: a new algorithm for training recurrent networks. In NIPS, pp. 4601–4609. Cited by: §1.
  • [17] D. Li, Y. Yang, Y. Song, and T. M. Hospedales (2018) Learning to generalize: meta-learning for domain generalization. In AAAI, pp. 3490–3497. Cited by: §4.2.
  • [18] J. Li, W. Monroe, A. Ritter, M. Galley, J. Gao, and D. Jurafsky (2016) Deep reinforcement learning for dialogue generation. In EMNLP, pp. 1192–1202. Cited by: §1.
  • [19] P. Li, W. Lam, L. Bing, and Z. Wang (2017) Deep recurrent generative decoder for abstractive text summarization. In EMNLP, pp. 2091–2100. Cited by: §1.
  • [20] X. Li, M. Sun, and P. Li (2019) Multi-agent discussion mechanism for natural language generation. In AAAI, pp. 6096–6103. Cited by: §1.
  • [21] K. Lin, D. Li, X. He, Z. Zhang, and M. Sun (2017) Adversarial ranking for language generation. In NIPS, pp. 3155–3165. Cited by: §2, §5.
  • [22] Y. Liu, S. Zhong, and W. Li (2012) Query-oriented multi-document summarization via unsupervised deep learning. In AAAI, Cited by: §1.
  • [23] S. Lu, L. Yu, W. Zhang, and Y. Yu (2019) CoT: cooperative training for generative modeling of discrete data. In ICML, pp. 4164–4172. Cited by: §2, §5, §5.
  • [24] S. Lu, Y. Zhu, W. Zhang, J. Wang, and Y. Yu (2018) Neural text generation: past, present and beyond. arXiv preprint arXiv:1803.07133. Cited by: §2.
  • [25] W. Nie, N. Narodytska, and A. Patel (2019) RelGAN: relational generative adversarial networks for text generation. In ICLR, Cited by: §2, §5, §5.
  • [26] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §1.
  • [27] S. Rajeswar, S. Subramanian, F. Dutil, C. Pal, and A. Courville (2017) Adversarial generation of natural language. arXiv preprint arXiv:1705.10929. Cited by: §1.
  • [28] A. A. Rusu, S. G. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, and R. Hadsell (2016) Policy distillation. In ICLR, Cited by: §4.1.
  • [29] M. Sun, X. Li, and P. Li (2018) Logician and orator: learning from the duality between language and knowledge in open domain. In EMNLP, pp. 2119–2130. Cited by: §1.
  • [30] I. Sutskever, O. Vinyals, and Q. V. Le (2014) Sequence to sequence learning with neural networks. In NIPS, pp. 3104–3112. Cited by: §1.
  • [31] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour (2000) Policy gradient methods for reinforcement learning with function approximation. In NIPS, pp. 1057–1063. Cited by: §2.
  • [32] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §1.
  • [33] T. Wen, M. Gasic, N. Mrksic, P. Su, D. Vandyke, and S. Young (2015) Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In EMNLP, pp. 1711–1721. Cited by: §1.
  • [34] R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8 (3-4), pp. 229–256. Cited by: §3.
  • [35] J. Xu, X. Ren, J. Lin, and X. Sun (2018) DP-gan: diversity-promoting generative adversarial network for generating informative and diversified text. arXiv preprint arXiv:1802.01345. Cited by: §2.
  • [36] H. Yin and S. J. Pan (2017) Knowledge transfer for deep reinforcement learning with hierarchical experience replay. In AAAI, pp. 1640–1646. Cited by: §4.1.
  • [37] L. Yu, W. Zhang, J. Wang, and Y. Yu (2017) Seqgan: sequence generative adversarial nets with policy gradient. In AAAI, pp. 2852–2858. Cited by: §2, §5, §5, §5.1.
  • [38] Y. Zhang, Z. Gan, K. Fan, Z. Chen, R. Henao, D. Shen, and L. Carin (2017) Adversarial feature matching for text generation. In ICML, pp. 4006–4015. Cited by: §2, §3.
  • [39] Y. Zhu, S. Lu, L. Zheng, J. Guo, W. Zhang, J. Wang, and Y. Yu (2018) Texygen: a benchmarking platform for text generation models. In SIGIR, pp. 1097–1100. Cited by: §5.2.