Adversarial Text Generation via Feature-Mover's Distance

09/17/2018 ∙ by Liqun Chen, et al. ∙ Duke University

Generative adversarial networks (GANs) have achieved significant success in generating real-valued data. However, the discrete nature of text hinders the application of GAN to text-generation tasks. Instead of using the standard GAN objective, we propose to improve text-generation GAN via a novel approach inspired by optimal transport. Specifically, we consider matching the latent feature distributions of real and synthetic sentences using a novel metric, termed the feature-mover's distance (FMD). This formulation leads to a highly discriminative critic and easy-to-optimize objective, overcoming the mode-collapsing and brittle-training problems in existing methods. Extensive experiments are conducted on a variety of tasks to evaluate the proposed model empirically, including unconditional text generation, style transfer from non-parallel text, and unsupervised cipher cracking. The proposed model yields superior performance, demonstrating wide applicability and effectiveness.

1 Introduction

Natural language generation is an important building block in many applications, such as machine translation [5], dialogue generation [36], and image captioning [14]. While these applications demonstrate the practical value of generating coherent and meaningful sentences in a supervised setup, unsupervised text generation, which aims to estimate the distribution of real text from a corpus, remains challenging. Previous approaches, which often maximize the log-likelihood of each ground-truth word given prior observed words [41], typically suffer from exposure bias [6, 47], i.e., the discrepancy between training and inference stages: during inference, each word is generated in sequence based on previously generated words, while during training the ground-truth words are used at each timestep [27, 53, 58].

Recently, adversarial training has emerged as a powerful paradigm to address the aforementioned issues. The generative adversarial network (GAN) [21] matches the distributions of synthetic and real data by introducing a two-player adversarial game between a generator and a discriminator. The generator is trained to learn a nonlinear function that maps samples from a given (simple) prior distribution to synthetic data that appear realistic, while the discriminator aims to distinguish the fake data from real samples. GAN can be trained efficiently via back-propagation through the nonlinear function of the generator, which typically requires the data to be continuous (e.g., images). However, the discrete nature of text renders the model non-differentiable, hindering the use of GANs in natural language processing tasks.

Attempts have been made to overcome such difficulties, which can be roughly divided into two categories. The first includes models that combine ideas from GAN and reinforcement learning (RL), framing text generation as a sequential decision-making process. Specifically, the gradient of the generator is estimated via the policy-gradient algorithm. Prominent examples from this category include SeqGAN [60], MaliGAN [8], RankGAN [37], LeakGAN [24] and MaskGAN [15]. Despite the promising performance of these approaches, one major disadvantage of such RL-based strategies is that they typically yield high-variance gradient estimates, which are known to be challenging for optimization [40, 61].

Models from the second category adopt the original framework of GAN without incorporating the RL methods (i.e., RL-free). Distinct from RL-based approaches, TextGAN [61] and Gumbel-Softmax GAN (GSGAN) [31] apply a simple soft-argmax operator, and a similar Gumbel-softmax trick [28, 40], respectively, to provide a continuous approximation of the discrete distribution (i.e., multinomial) on text, so that the model is still end-to-end differentiable. What makes this approach appealing is that it feeds the optimizer with low-variance gradients, improving stability and speed of training. In this work, we aim to improve the training of GAN that resides in this category.

When training GAN to generate text samples, one practical challenge is that the gradient from the discriminator often vanishes after being trained for only a few iterations. That is, the discriminator can easily distinguish the fake sentences from the real ones. TextGAN [61] proposed a remedy based on feature matching [49], adding Maximum Mean Discrepancy (MMD) to the original objective of GAN [22]. However, in practice, the model is still difficult to train. Specifically, (i) the bandwidth of the RBF kernel is difficult to choose; (ii) kernel methods often suffer from poor scaling; and (iii) empirically, TextGAN tends to generate short sentences.

In this work, we present feature mover GAN (FM-GAN), a novel adversarial approach that leverages optimal transport (OT) to construct a new model for text generation. Specifically, OT considers the problem of optimally transporting one set of data points to another, and is closely related to GAN. The earth-mover’s distance (EMD) is employed often as a metric for the OT problem. In our setting, a variant of the EMD between the feature distributions of real and synthetic sentences is proposed as the new objective, denoted as the feature-mover’s distance (FMD). In this adversarial game, the discriminator aims to maximize the dissimilarity of the feature distributions based on the FMD, while the generator is trained to minimize the FMD by synthesizing more-realistic text. In practice, the FMD is turned into a differentiable quantity and can be computed using the proximal point method [59].

The main contributions of this paper are as follows: (i) A new GAN model based on optimal transport is proposed for text generation. The proposed model is RL-free, and uses a so-called feature-mover’s distance as the objective. (ii) We evaluate our model comprehensively on unconditional text generation. When compared with previous methods, our model shows a substantial improvement in terms of generation quality based on the BLEU statistics [43] and human evaluation. Further, our model also achieves good generation diversity based on the self-BLEU statistics [63]. (iii) In order to demonstrate the versatility of the proposed method, we also generalize our model to conditional-generation tasks, including non-parallel text style transfer [54], and unsupervised cipher cracking [20].

2 Background

2.1 Adversarial training for distribution matching

We review the basic idea of adversarial distribution matching (ADM), which avoids the specification of a likelihood function. Instead, this strategy defines draws from the synthetic data distribution $p_\theta$ by drawing a latent code $\boldsymbol{z}$ from an easily sampled distribution $p(\boldsymbol{z})$, and learning a generator function $G_\theta$ such that $\tilde{\boldsymbol{x}} = G_\theta(\boldsymbol{z})$. The form of $p_\theta$ is neither specified nor learned; rather, we learn to draw samples from $p_\theta$. To match the ensemble of draws from $p_\theta$ with an ensemble of draws from the real data distribution $p_x$, ADM introduces a variational function $f_\phi(\cdot)$, where $f_\phi$ is known as the critic function or discriminator. The goal of ADM is to obtain an equilibrium of the following objective:

$$\min_{\theta}\,\max_{\phi}\; \mathcal{D}\big(p_x, p_\theta;\, f_\phi\big)\,, \qquad (1)$$

where $\mathcal{D}(p_x, p_\theta; f_\phi)$ is computed using samples from $p_x$ and $p_\theta$ (not explicitly in terms of the distributions themselves), and defines a discrepancy metric between the two distributions [3, 42]. One popular example of ADM is the generative adversarial network (GAN), in which (1) recovers the Jensen-Shannon divergence (JSD) at the optimal discriminator [21]; expectations over $p_x$ and $p_\theta$ are computed approximately with samples from the respective distributions. Most of the existing work applying GAN to text generation also uses this standard form, combining it with policy gradients [60]. However, it has been shown in [2] that this standard GAN objective suffers from an unstably weak learning signal when the discriminator gets close to local optimality, due to the gradient-vanishing effect. This is because the JSD implied by the original GAN loss is not continuous wrt the generator parameters.
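As a concrete illustration of this standard objective, below is a minimal PyTorch-style sketch of one common way to implement the GAN losses that recover the JSD at the optimal discriminator; the modules `D` and `G` are generic stand-ins (not the architecture used in this paper), and the non-saturating generator loss is one typical choice rather than the exact form in (1).

```python
import torch
import torch.nn.functional as F

def gan_losses(D, G, x_real, z):
    """Standard GAN losses for one mini-batch; D outputs raw logits."""
    x_fake = G(z)
    logits_real = D(x_real)
    logits_fake = D(x_fake.detach())
    # Discriminator: maximize log D(x) + log(1 - D(G(z)))
    d_loss = F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real)) + \
             F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    # Generator: non-saturating variant, maximize log D(G(z))
    logits_gen = D(x_fake)
    g_loss = F.binary_cross_entropy_with_logits(logits_gen, torch.ones_like(logits_gen))
    return d_loss, g_loss
```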


Figure 1: Illustration of the proposed feature mover GAN (FM-GAN) for text generation.

2.2 Sentence to feature

GAN models were originally developed for learning to draw from a continuous distribution. The discrete nature of text samples hinders the use of GANs, and thus a vectorization of a sequence of discrete tokens is considered. Let $s = (w_1, \ldots, w_L)$ be a sentence of length $L$, where $w_t$ denotes the one-hot representation of the $t$-th word. A word-level vector representation of each word in $s$ is achieved by learning a word embedding matrix $\mathbf{W}_e \in \mathbb{R}^{d \times V}$, where $V$ is the size of the vocabulary. Each word is represented as a $d$-dimensional vector $\boldsymbol{x}_t = \mathbf{W}_e w_t$. The sentence is now represented as a matrix $X = [\boldsymbol{x}_1, \ldots, \boldsymbol{x}_L] \in \mathbb{R}^{d \times L}$. A neural network $F_\phi(\cdot)$, such as an RNN [5, 10], CNN [29, 18, 52] or SWEM [51], can then be applied to extract the feature vector $\boldsymbol{f} = F_\phi(X)$.

2.3 Optimal transport

GAN can be interpreted in the framework of optimal transport theory, and it has been shown that the Earth-Mover's Distance (EMD) is a good objective for generative modeling [3]. Originally applied in content-based image retrieval tasks [48], EMD is well known for comparing multidimensional distributions that describe the different features of images (e.g., brightness, color, and texture content). It is defined over a ground distance (i.e., cost function) between every pair of perceptual features, extending the notion of a distance between single elements to a distance between sets of elements. Specifically, consider two probability distributions $\mu$ and $\nu$; the EMD is then defined as:

$$\mathcal{D}_{\text{emd}}(\mu, \nu) \;=\; \inf_{\pi \in \Pi(\mu, \nu)}\; \mathbb{E}_{(\boldsymbol{x}, \boldsymbol{y}) \sim \pi}\big[c(\boldsymbol{x}, \boldsymbol{y})\big]\,, \qquad (2)$$

where $\Pi(\mu, \nu)$ denotes the set of all joint distributions $\pi(\boldsymbol{x}, \boldsymbol{y})$ with marginals $\mu$ and $\nu$, and $c(\cdot, \cdot)$ is the cost function (e.g., Euclidean or cosine distance). Intuitively, EMD is the minimum cost that $\pi$ has to pay to transport the mass from $\mu$ to $\nu$.

3 Feature Mover GAN

We propose a new GAN framework for discrete text data, called feature mover GAN (FM-GAN). The idea of optimal transport (OT) is integrated into adversarial distribution matching. Explicitly, the original critic function in GANs is replaced by the Earth-Mover's Distance (EMD) between the sentence features of real and synthetic data. In addition, to handle the intractability of computing (2) [3, 49], we define the Feature-Mover's Distance (FMD), a variant of EMD that can be solved tractably using the Inexact Proximal point method for OT (IPOT) algorithm [59]. In the following sections, we discuss the main objective of our model, the detailed training process for text generation, as well as extensions. An illustration of the framework is shown in Figure 1.

3.1 Feature-mover’s distance

In practice, it is not tractable to calculate the minimization over $\pi$ in (2) [3, 19, 50]. In this section, we propose the Feature-Mover's Distance (FMD), which can be solved tractably. Consider two sets of sentence feature vectors $\mathcal{F} = \{\boldsymbol{f}_i\}_{i=1}^{m}$ and $\mathcal{F}' = \{\boldsymbol{f}'_j\}_{j=1}^{n}$ drawn from two different sentence feature distributions; $m$ and $n$ are the total numbers of $d_f$-dimensional sentence features in $\mathcal{F}$ and $\mathcal{F}'$, respectively. Let $\mathbf{T} \in \mathbb{R}^{m \times n}$ be a transport matrix in which $T_{ij}$ defines how much of feature vector $\boldsymbol{f}_i$ would be transported to $\boldsymbol{f}'_j$. The FMD between the two sets of sentence features is then defined as:

$$\mathcal{D}_{\text{fmd}}(\mathcal{F}, \mathcal{F}') \;=\; \min_{\mathbf{T} \geq 0}\; \langle \mathbf{T}, \mathbf{C} \rangle\,, \qquad (3)$$

where $\mathbf{T}\mathbf{1}_n = \frac{1}{m}\mathbf{1}_m$ and $\mathbf{T}^{\top}\mathbf{1}_m = \frac{1}{n}\mathbf{1}_n$ are the constraints, and $\langle \cdot, \cdot \rangle$ represents the Frobenius dot-product. In this work, the transport cost is defined as the cosine distance: $c(\boldsymbol{f}_i, \boldsymbol{f}'_j) = 1 - \frac{\boldsymbol{f}_i^{\top}\boldsymbol{f}'_j}{\|\boldsymbol{f}_i\|_2 \|\boldsymbol{f}'_j\|_2}$, and $\mathbf{C}$ is the cost matrix such that $C_{ij} = c(\boldsymbol{f}_i, \boldsymbol{f}'_j)$. Note that during training, we set $m = n$ as the mini-batch size.

We propose to use the Inexact Proximal point method for Optimal Transport (IPOT) algorithm to compute the optimal transport matrix $\mathbf{T}^*$, which provides a solution to the original optimal transport problem (3) [59]. Specifically, IPOT iteratively solves the following optimization problem:

$$\mathbf{T}^{(t+1)} \;=\; \underset{\mathbf{T} \in \Pi(\mathcal{F}, \mathcal{F}')}{\arg\min}\; \Big\{ \langle \mathbf{T}, \mathbf{C} \rangle \;+\; \beta \cdot B\big(\mathbf{T}, \mathbf{T}^{(t)}\big) \Big\}\,, \qquad (4)$$

where $\Pi(\mathcal{F}, \mathcal{F}')$ is the set of transport matrices satisfying the constraints in (3), and $B(\mathbf{T}, \mathbf{T}^{(t)}) = \sum_{i,j} T_{ij} \log \frac{T_{ij}}{T^{(t)}_{ij}} - \sum_{i,j} T_{ij} + \sum_{i,j} T^{(t)}_{ij}$ denotes the Bregman divergence wrt the entropy functional $H(\mathbf{T}) = \sum_{i,j} T_{ij} \log T_{ij}$.

1: Input: cost matrix $\mathbf{C}$, batch size $n$, proximity parameter $\beta$, inner iteration number $K$
2: $\boldsymbol{\sigma} = \frac{1}{n}\mathbf{1}_n$, $\mathbf{T}^{(1)} = \mathbf{1}\mathbf{1}^{\top}$
3: $A_{ij} = e^{-C_{ij}/\beta}$
4: for $t = 1, 2, 3, \ldots$ do
5:       $\mathbf{Q} = \mathbf{A} \odot \mathbf{T}^{(t)}$ // $\odot$ is Hadamard product
6:       for $k = 1, \ldots, K$ do
7:             $\boldsymbol{\delta} = \frac{1}{n\,\mathbf{Q}\boldsymbol{\sigma}}$, $\boldsymbol{\sigma} = \frac{1}{n\,\mathbf{Q}^{\top}\boldsymbol{\delta}}$
8:       end for
9:       $\mathbf{T}^{(t+1)} = \text{diag}(\boldsymbol{\delta})\, \mathbf{Q}\, \text{diag}(\boldsymbol{\sigma})$
10: end for
Algorithm 1 IPOT algorithm [59]

Here the Bregman divergence serves as a proximity metric and $\beta$ is the proximity penalty parameter. This problem can be solved efficiently with Sinkhorn-style proximal point iterations [13, 59], as detailed in Algorithm 1.

Notably, unlike the Sinkhorn algorithm [19], we do not need to back-propagate the gradient through the proximal point iterations, which is justified by the Envelope Theorem [1] (see the Supplementary Material (SM)). This accelerates the learning process significantly and improves training stability [59].
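To make the computation concrete, the following is a minimal PyTorch sketch of Algorithm 1 together with the resulting FMD loss in (3); the function names (`cosine_cost`, `ipot`, `fmd_loss`), the default `beta`, and the iteration counts are illustrative assumptions, not taken from the released implementation. Note how the transport plan is computed under `torch.no_grad()`, reflecting the point above that no gradient is back-propagated through the proximal point iterations.

```python
import torch

def cosine_cost(f_real, f_fake):
    """Pairwise cosine-distance cost matrix C of shape (m, n)."""
    f_real = f_real / (f_real.norm(dim=1, keepdim=True) + 1e-8)
    f_fake = f_fake / (f_fake.norm(dim=1, keepdim=True) + 1e-8)
    return 1.0 - f_real @ f_fake.t()

def ipot(C, beta=1.0, n_outer=50, k_inner=1):
    """Approximate optimal transport plan T for cost C with uniform marginals (Algorithm 1)."""
    m, n = C.shape
    sigma = torch.full((n, 1), 1.0 / n, device=C.device)
    T = torch.ones(m, n, device=C.device)
    A = torch.exp(-C / beta)                      # proximal kernel
    for _ in range(n_outer):
        Q = A * T                                 # Hadamard product
        for _ in range(k_inner):
            delta = 1.0 / (m * (Q @ sigma))
            sigma = 1.0 / (n * (Q.t() @ delta))
        T = delta * Q * sigma.t()                 # diag(delta) Q diag(sigma)
    return T

def fmd_loss(f_real, f_fake, beta=1.0):
    """Feature-mover's distance <T, C>; T is treated as a constant wrt the networks."""
    C = cosine_cost(f_real, f_fake)
    with torch.no_grad():                         # no back-propagation through IPOT
        T = ipot(C.detach(), beta=beta)
    return torch.sum(T * C)
```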

1: Input: batch size $m$, dataset $\mathcal{X}$, learning rate $\eta$, maximum number of iterations $N$.
2: for $iteration = 1, \ldots, N$ do
3:     for $k = 1, \ldots, K$ do
4:         Sample a mini-batch of $\boldsymbol{x} \sim p_x$ and $\boldsymbol{z} \sim p(\boldsymbol{z})$;
5:         Extract sentence features $\boldsymbol{f} = F_\phi(\boldsymbol{x})$ and $\boldsymbol{f}' = F_\phi(G_\theta(\boldsymbol{z}))$;
6:         Update the feature extractor $F_\phi$ by maximizing: $\mathcal{D}_{\text{fmd}}(\boldsymbol{f}, \boldsymbol{f}')$
7:     end for
8:     Repeat Steps 4 and 5;
9:     Update the generator $G_\theta$ by minimizing: $\mathcal{D}_{\text{fmd}}(\boldsymbol{f}, \boldsymbol{f}')$
10: end for
Algorithm 2 Adversarial text generation via FMD.

3.2 Adversarial distribution matching with FMD

To integrate FMD into adversarial distribution matching, we propose to solve the following mini-max game:

$$\min_{\theta}\,\max_{\phi}\;\; \mathbb{E}_{\boldsymbol{x} \sim p_x,\; \boldsymbol{z} \sim p(\boldsymbol{z})}\Big[\mathcal{D}_{\text{fmd}}\big(F_\phi(\boldsymbol{x}),\, F_\phi(G_\theta(\boldsymbol{z}))\big)\Big]\,, \qquad (5)$$

where $F_\phi$ is the sentence feature extractor, and $G_\theta$ is the sentence generator. We call this feature mover GAN (FM-GAN). The detailed training procedure is provided in Algorithm 2.
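For intuition, the sketch below runs one round of this mini-max game with the `fmd_loss` defined above; the tiny linear "generator" and "extractor" modules, the sizes, the learning rates, and the random tensors standing in for embedded real sentences are placeholders for illustration only, not the LSTM generator and CNN extractor described next.

```python
import torch
import torch.nn as nn

d_feat, d_noise, batch = 64, 32, 8
generator = nn.Linear(d_noise, d_feat)        # stand-in for the LSTM sentence generator
extractor = nn.Linear(d_feat, d_feat)         # stand-in for the CNN feature extractor
opt_f = torch.optim.Adam(extractor.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)

for _ in range(2):                            # feature-extractor (critic) updates, Steps 3-7
    x_real = torch.randn(batch, d_feat)       # placeholder for embedded real sentences
    z = torch.randn(batch, d_noise)
    loss_f = -fmd_loss(extractor(x_real), extractor(generator(z)))   # maximize FMD
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

x_real = torch.randn(batch, d_feat)           # generator update, Steps 8-9
z = torch.randn(batch, d_noise)
loss_g = fmd_loss(extractor(x_real), extractor(generator(z)))        # minimize FMD
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```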

Sentence generator 

The Long Short-Term Memory (LSTM) recurrent neural network [25] is used as our sentence generator, parameterized by $\theta$. Let $\mathbf{W}_e \in \mathbb{R}^{d \times V}$ be our learned word embedding matrix, where $V$ is the vocabulary size, with each word $w_t$ in sentence $s$ embedded into $\boldsymbol{x}_t = \mathbf{W}_e w_t$, a $d$-dimensional word vector. All words in the synthetic sentence are generated sequentially, i.e.,

$$\tilde{w}_t = \arg\max\big(\mathbf{V}\boldsymbol{h}_t\big)\,, \qquad (6)$$

where $\boldsymbol{h}_t$ is the hidden unit updated recursively through the LSTM cell, $\boldsymbol{h}_t = \text{LSTM}(\boldsymbol{h}_{t-1}, \mathbf{W}_e \tilde{w}_{t-1}, \boldsymbol{z})$, $\mathbf{V}$ is a decoding matrix, and $\text{softmax}(\mathbf{V}\boldsymbol{h}_t)$ defines the distribution over the vocabulary. Note that, distinct from a traditional sentence generator, here the argmax operation is used, rather than sampling from a multinomial distribution as in the standard LSTM. Therefore, all randomness during generation is clamped into the noise vector $\boldsymbol{z}$.

The generator cannot be trained directly, due to the non-differentiable argmax function. Instead, a soft-argmax operator [61] is used as a continuous approximation:

$$\tilde{\boldsymbol{x}}_t = \mathbf{W}_e\, \text{softmax}\big(\mathbf{V}\boldsymbol{h}_t / \tau\big)\,, \qquad (7)$$

where $\tau$ is the temperature parameter. Note that when $\tau \rightarrow 0$, (7) approximates (6). We denote $\tilde{X} = [\tilde{\boldsymbol{x}}_1, \ldots, \tilde{\boldsymbol{x}}_L]$ as the approximated embedding matrix for the synthetic sentence.
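A minimal sketch of one soft-argmax decoding step under the notation above; the argument names and the default temperature are illustrative assumptions, not the paper's exact hyperparameters.

```python
import torch

def soft_argmax_step(h_t, V_dec, W_e, tau=0.01):
    """One decoding step of Eq. (7).

    h_t:   LSTM hidden state, shape (d_h,)
    V_dec: decoding matrix, shape (vocab, d_h)
    W_e:   word embedding matrix, shape (vocab, d_emb)
    """
    logits = V_dec @ h_t                         # unnormalized scores over the vocabulary
    probs = torch.softmax(logits / tau, dim=0)   # approaches a one-hot vector as tau -> 0
    return probs @ W_e                           # "soft" word embedding fed to the next LSTM step
```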

Feature extractor 

We use the convolutional neural network proposed in [11, 29] as our sentence feature extractor, parameterized by $\phi$, which contains a convolution layer and a max-pooling layer. The sentence is represented as a matrix $X \in \mathbb{R}^{d \times L_{\max}}$, where $d$ is the word-embedding dimension and $L_{\max}$ is the maximum sentence length. A convolution filter $\mathbf{W}_c \in \mathbb{R}^{d \times h}$ is applied to a window of $h$ words to produce new features. After applying a nonlinear activation function, we then use the max-over-time pooling operation [11] on the feature maps to extract the maximum values. While the convolution operator can extract features independent of their positions in the sentence, the max-pooling operator tries to capture the most salient features.

The above procedure describes how to extract features using one filter. Our model uses multiple filters with different window sizes, where each filter is considered a linguistic feature detector. Assume $p$ different window sizes, and that for each window size we have $k$ filters; then a sentence feature vector $\boldsymbol{f} = F_\phi(X) \in \mathbb{R}^{pk}$ is obtained by concatenating the pooled features of all filters.
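As a sketch of this extractor, the module below uses illustrative window sizes and filter counts (not the paper's exact hyperparameters) and tanh as one plausible choice of nonlinearity.

```python
import torch
import torch.nn as nn

class SentenceFeatureExtractor(nn.Module):
    """Multi-window CNN with max-over-time pooling, in the spirit of [11, 29]."""
    def __init__(self, d_emb=300, window_sizes=(3, 4, 5), n_filters=100):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(d_emb, n_filters, kernel_size=w) for w in window_sizes])

    def forward(self, x):                      # x: (batch, d_emb, max_len) word embeddings
        feats = []
        for conv in self.convs:
            h = torch.tanh(conv(x))            # (batch, n_filters, max_len - w + 1)
            feats.append(h.max(dim=2).values)  # max-over-time pooling -> (batch, n_filters)
        return torch.cat(feats, dim=1)         # sentence feature of size len(window_sizes)*n_filters

# Example: SentenceFeatureExtractor()(torch.randn(8, 300, 20)) has shape (8, 300).
```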

3.3 Extensions to conditional text generation tasks

Style transfer   Our FM-GAN model can be readily generalized to conditional generation tasks, such as text style transfer [26, 54, 44, 35]. The style transfer task is essentially learning the conditional distributions $p(\boldsymbol{x}_2 \mid \boldsymbol{x}_1; \boldsymbol{y}_1, \boldsymbol{y}_2)$ and $p(\boldsymbol{x}_1 \mid \boldsymbol{x}_2; \boldsymbol{y}_1, \boldsymbol{y}_2)$, where $\boldsymbol{y}_1$ and $\boldsymbol{y}_2$ represent the labels for the different styles, and $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ are sentences in the different styles. Assuming $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ are conditionally independent given the latent code $\boldsymbol{z}$, we have:

$$p(\boldsymbol{x}_2 \mid \boldsymbol{x}_1; \boldsymbol{y}_1, \boldsymbol{y}_2) \;=\; \int_{\boldsymbol{z}} p(\boldsymbol{x}_2 \mid \boldsymbol{y}_2, \boldsymbol{z})\; p(\boldsymbol{z} \mid \boldsymbol{x}_1, \boldsymbol{y}_1)\, d\boldsymbol{z}\,. \qquad (8)$$

Equation (8) suggests an autoencoder can be applied for this task. From this perspective, we can apply our optimal transport method in the cross-aligned autoencoder [54], by replacing the standard GAN loss with our FMD critic. We follow the same idea as [54] to build the style transfer framework. $E(\boldsymbol{x}, \boldsymbol{y})$ is our encoder that infers the content $\boldsymbol{z}$ from given style $\boldsymbol{y}$ and sentence $\boldsymbol{x}$; $G(\boldsymbol{z}, \boldsymbol{y})$ is our decoder that generates the synthetic sentence $\tilde{\boldsymbol{x}}$, given content $\boldsymbol{z}$ and style $\boldsymbol{y}$. We add the following reconstruction loss for the autoencoder:

$$\mathcal{L}_{\text{rec}} \;=\; \mathbb{E}_{\boldsymbol{x}_1 \sim p_1}\big[-\log p_G\big(\boldsymbol{x}_1 \mid \boldsymbol{y}_1, E(\boldsymbol{x}_1, \boldsymbol{y}_1)\big)\big] \;+\; \mathbb{E}_{\boldsymbol{x}_2 \sim p_2}\big[-\log p_G\big(\boldsymbol{x}_2 \mid \boldsymbol{y}_2, E(\boldsymbol{x}_2, \boldsymbol{y}_2)\big)\big]\,, \qquad (9)$$

where $p_1$ and $p_2$ are the empirical data distributions for each style. We also need to implement adversarial training on the generator with discrete data. First, we use the soft-argmax approximation discussed in Section 3.2; second, we also use the Professor-Forcing [32] algorithm to match the sequences of LSTM hidden states. That is, the discriminator is designed to discriminate the transferred sentence $\tilde{X}_{2 \to 1}$ from the real sentence $\boldsymbol{x}_1$ (and likewise for the other style). Unlike [54], which uses two discriminators, our model only needs to apply the FMD critic twice to match the distributions for the two different styles:

$$\mathcal{L}_{\text{adv}} \;=\; \mathcal{D}_{\text{fmd}}\big(F_\phi(\mathbf{W}_e \boldsymbol{x}_1),\, F_\phi(\tilde{X}_{2 \to 1})\big) \;+\; \mathcal{D}_{\text{fmd}}\big(F_\phi(\mathbf{W}_e \boldsymbol{x}_2),\, F_\phi(\tilde{X}_{1 \to 2})\big)\,, \qquad (10)$$

where $\mathbf{W}_e$ is the learned word embedding matrix. The final objective function for this task is $\mathcal{L} = \mathcal{L}_{\text{rec}} + \lambda \mathcal{L}_{\text{adv}}$, where $\lambda$ is a hyperparameter that balances these two terms.

Unsupervised decipher   Our model can also be used to tackle the task of unsupervised cipher cracking by using the framework of CycleGAN [62]. In this task, we have two different corpora: $\mathcal{X}$ denotes the original sentences, and $\mathcal{Y}$ denotes the corpus encrypted with some cipher code, which is unknown to our model. Our goal is to design two generators that can map one corpus to the other, i.e., $G_{xy}: \mathcal{X} \rightarrow \mathcal{Y}$ and $G_{yx}: \mathcal{Y} \rightarrow \mathcal{X}$. Unlike the style-transfer task, we define $F_x$ and $F_y$ as two sentence feature extractors for the different corpora. Here we denote $p_x$ to be the empirical distribution of the original corpus, and $p_y$ to be the distribution of the encrypted corpus. Following [20], we design two losses: the cycle-consistency loss (reconstruction loss) and the adversarial feature-matching loss. The cycle-consistency loss is defined on the feature space as:

$$\mathcal{L}_{\text{cyc}} \;=\; \mathbb{E}_{\boldsymbol{x} \sim p_x}\big[\|\mathbf{W}_e \boldsymbol{x} - G_{yx}(G_{xy}(\mathbf{W}_e \boldsymbol{x}))\|_1\big] \;+\; \mathbb{E}_{\boldsymbol{y} \sim p_y}\big[\|\mathbf{W}_e \boldsymbol{y} - G_{xy}(G_{yx}(\mathbf{W}_e \boldsymbol{y}))\|_1\big]\,, \qquad (11)$$

where $\|\cdot\|_1$ denotes the $\ell_1$-norm, and $\mathbf{W}_e$ is the word embedding matrix. The adversarial loss aims to help match the generated samples with the target corpus:

$$\mathcal{L}_{\text{adv}} \;=\; \mathcal{D}_{\text{fmd}}\big(F_y(\mathbf{W}_e \boldsymbol{y}),\, F_y(G_{xy}(\mathbf{W}_e \boldsymbol{x}))\big) \;+\; \mathcal{D}_{\text{fmd}}\big(F_x(\mathbf{W}_e \boldsymbol{x}),\, F_x(G_{yx}(\mathbf{W}_e \boldsymbol{y}))\big)\,. \qquad (12)$$

The final objective function for the decipher task is $\mathcal{L} = \mathcal{L}_{\text{cyc}} + \lambda \mathcal{L}_{\text{adv}}$, where $\lambda$ is a hyperparameter that balances the two terms.

4 Related work

GAN for text generation   SeqGAN [60], MaliGAN [8], RankGAN [37], and MaskGAN [15] use reinforcement learning (RL) algorithms for text generation. The idea behind all these works is similar: they use the REINFORCE algorithm to obtain an unbiased gradient estimator for the generator, and apply a roll-out policy to obtain the reward from the discriminator. LeakGAN [24] adopts a hierarchical RL framework to improve text generation. However, it is slow to train due to its complex design. For GANs in the RL-free category, GSGAN [31] and TextGAN [61] use the Gumbel-softmax and soft-argmax tricks, respectively, to deal with discrete data. While the latter uses MMD to match the features of real and synthetic sentences, both models still keep the original GAN loss function, which may result in the gradient-vanishing issue of the discriminator.

GAN with OT  Wasserstein GAN (WGAN) [3, 23] applies the EMD by imposing a Lipschitz constraint on the discriminator, which alleviates the gradient-vanishing issue when dealing with continuous data (i.e., images). However, for discrete data (i.e., text), the gradient still vanishes after a few iterations, even when weight-clipping or the gradient-penalty is applied on the discriminator [20]. Instead, the Sinkhorn divergence generative model (Sinkhorn-GM) [19] and Optimal transport GAN (OT-GAN) [50] optimize the Sinkhorn divergence [13], defined as an entropy-regularized EMD (2): $\mathcal{W}_{\varepsilon}(\mu, \nu) = \min_{\pi \in \Pi(\mu, \nu)} \mathbb{E}_{(\boldsymbol{x}, \boldsymbol{y}) \sim \pi}[c(\boldsymbol{x}, \boldsymbol{y})] + \varepsilon H(\pi)$, where $H(\pi)$ is the entropy term and $\varepsilon$ is a hyperparameter. While the Sinkhorn algorithm [13] is proposed to solve this entropy-regularized EMD, the solution is sensitive to the value of the hyperparameter $\varepsilon$, leading to a trade-off between computational efficiency and training stability. Distinct from that, our method uses IPOT to tackle the original problem of OT. In practice, IPOT is more efficient than the Sinkhorn algorithm, and the hyperparameter $\beta$ in (4) only affects the convergence rate [59].

5 Experiment

We apply the proposed model to three application scenarios: generic (unconditional) sentence generation, conditional sentence generation (with pre-specified sentiment), and unsupervised decipher. For the generic sentence generation task, we experiment with three standard benchmarks: CUB captions [57], MS COCO captions [38], and EMNLP2017 WMT News [24].

Since the sentences in the CUB dataset are typically short and have similar structure, it is employed as our toy evaluation. For the second dataset, we sample sentences from the original MS COCO captions. Note that we do not remove any low-frequency words for the first two datasets, in order to evaluate the models in the case with a relatively large vocabulary size. The third dataset is a large long-text collection from EMNLP2017 WMT News Dataset. To facilitate comparison with baseline methods, we follow the same data preprocessing procedures as in [24]. The summary statistics of all the datasets are presented in Table 1.

Dataset Train Test Vocabulary average length
CUB captions 100,000 10,000 4,391 15
MS COCO captions 120,000 10,000 27,842 11
EMNLP2017 WMT News 278,686 10,000 5,728 28
Table 1: Summary statistics for the datasets used in the generic text generation experiments.

For conditional text generation, we consider the task of transferring an original sentence to the opposite sentiment, in the case where parallel (paired) data are not available. We use the same data as introduced in [54]. For the unsupervised decipher task, we follow the experimental setup in CipherGAN [20] and evaluate the model improvement after replacing the critic with the proposed FMD objective.

We employ the test-BLEU score [60], self-BLEU score [63], and human evaluation as the evaluation metrics for the generic sentence generation task. To ensure fair comparison, we perform extensive comparisons with several strong baseline models using the benchmark tool Texygen [63]. For the non-parallel text style transfer experiment, following [26, 54], we use a pretrained classifier to calculate the sentiment accuracy of transferred sentences. We also leverage human evaluation to further measure the quality of the transferred results. For the deciphering experiment, we adopt the average proportion of correctly mapped words as the accuracy metric, as proposed in [20]. Our code will be released to encourage future research.

Figure 2: Test-BLEU score (higher value implies better quality) vs self-BLEU score (lower value implies better diversity). Upper panel is BLEU-3 and lower panel is BLEU-4.

5.1 Generic text generation

In general, when evaluating the performance of different models, we desire a high test-BLEU score (good quality) and a low self-BLEU score (high diversity). Both scores should be considered: (i) a high test-BLEU score together with a high self-BLEU score means that the model might generate good sentences while suffering from mode collapse (i.e., low diversity); (ii) if a model generates sentences randomly, the diversity of the generated sentences could be high but the test-BLEU score would be low. Figure 2 is used to compare the performance of every model. For each subplot, the x-axis represents test-BLEU, and the y-axis represents self-BLEU (here we only show the BLEU-3 and BLEU-4 figures; more quantitative results can be found in the SM). For the CUB and MS COCO datasets, our model achieves both high test-BLEU and low self-BLEU, providing realistic sentences with high diversity. For the EMNLP WMT dataset, the synthetic sentences from SeqGAN, RankGAN, GSGAN and TextGAN are less coherent and realistic (examples can be found in the SM) due to the long-text nature of the dataset. In comparison, our model is still capable of providing realistic results.

Method MLE SeqGAN RankGAN LeakGAN
Human score 2.54 ± 0.79 2.55 ± 0.83 2.86 ± 0.95 3.41 ± 0.82
Method GSGAN TextGAN Our model real sentences
Human score 2.52 ± 0.78 3.03 ± 0.92 3.72 ± 0.80 4.21 ± 0.77
Table 2: Human evaluation results on EMNLP WMT.

To further evaluate the generation quality based on the EMNLP WMT dataset, we conduct a human Turing test on Amazon Mechanical Turk; 10 judges are asked to rate over 100 randomly sampled sentences from each model on a scale from 0 to 5. The means and standard deviations of the rating scores are calculated and provided in Table 2. We also provide some examples of the generated sentences from LeakGAN and our model in Table 3. More generated sentences are provided in the SM.

LeakGAN: (1) " people , if aleppo recognised switzerland stability , " mr . trump has said that " " it has been filled before the courts .
(2) the russian military , meanwhile previously infected orders , but it has already been done on the lead of the attack .
Ours: (1) this is why we will see the next few years , we ’ re looking forward to the top of the world , which is how we ’ re in the future .
(2) If you ’ re talking about the information about the public , which is not available , they have to see a new study .
Table 3: Examples of generated sentences from LeakGAN and our model.

5.2 Non-parallel text style transfer

Table 4 presents the sentiment transfer results on the Yelp review dataset, evaluated with the accuracy of transferred sentences as determined by a pretrained CNN classifier [29]. Note that with the same experimental setup as in [54], our model achieves significantly higher transfer accuracy compared with the cross-aligned autoencoder (CAE) model [54]. Moreover, our model even outperforms the controllable text generation method [26] and BST [44], where a sentiment classifier is directly pre-trained to guide the sentence generation process (in contrast, our model is trained in an end-to-end manner and requires no pre-training steps), and thus should potentially have better control over the style (i.e., sentiment) of generated sentences [54]. The superior performance of the proposed method highlights the ability of FMD to mitigate the vanishing-gradient issue caused by the discrete nature of text samples, and gives rise to better matching between the distributions of reviews belonging to the two different sentiments.

Method Controllable [26] CAE [54] BST [44] Our model
Accuracy(%) 84.5 80.6 87.2 89.8
Sentiment 3.6 3.2 - 4.1
Content 4.6 4.1 - 4.5
Fluency 4.2 3.7 - 4.4
Table 4: Sentiment transfer accuracy and human evaluation results on Yelp.

Human evaluations are conducted to assess the quality of the transferred sentences. In this regard, we randomly sample 100 sentences from the test set, and 5 volunteers rate the outputs of different models in terms of their fluency, sentiment, and content preservation in a double blind fashion. The rating score is from 0 to 5. Detailed results are shown in Table 4. We also provide sentiment transfer examples in Table 5. More examples are provided in the SM.

Original: one of the best gourmet store shopping experiences i have ever had .
Controllable : one of the best gourmet store shopping experiences i have ever had .
CAE: one of the worst staff i would ever ever ever had ever had .
Ours: one of the worst indian shopping store experiences i have ever had .
Original: staff behind the deli counter were super nice and efficient !
Controllable: staff behind the deli counter were super rude and efficient !
CAE: the staff were the front desk and were extremely rude airport !
Ours: staff behind the deli counter were super nice and inefficient !
Table 5: Sentiment transfer examples.

5.3 Unsupervised decipher

CipherGAN [20] uses GANs to tackle the task of unsupervised cipher cracking, utilizing the framework of CycleGAN [62] and adopting techniques such as Gumbel-softmax [31] to deal with discrete data. Unsupervised deciphering can be understood as a form of unsupervised machine translation, in which one language is treated as an enciphering of the other. In this experiment, we adapt the idea of feature-mover's distance to the original framework of CipherGAN and test this modified model on the Brown English text dataset [16].

The Brown English-language corpus [30] contains over one million words. In this experiment, only the top 200 most frequent words are considered, while the others are replaced by an "unknown" token. We denote this modified word-level dataset as Brown-W200. We use the Vigenère cipher [7] to encipher the original plain text. This dataset can be downloaded from https://github.com/for-ai/CipherGAN.

For fair comparison, all the model architectures and parameters are kept the same as in CipherGAN, while the critic of the discriminator is replaced by the FMD objective shown in (3). Table 6 shows the quantitative results in terms of the average proportion of words correctly mapped in a given sequence (i.e., deciphering accuracy). The baseline frequency-analysis model only operates when the cipher key is known. Our model achieves higher accuracy compared to the original CipherGAN. Note that some other experimental setups from [20] are not evaluated because the accuracy already reported there is extremely high, so the amount of improvement would not be apparent.

Method Freq. Analysis (with keys) CipherGAN [20] Our model
Accuracy(%) < 0.1 (44.3) 75.7 77.2
Table 6: Decipher results on Brown-W200.

6 Conclusion

We introduce a novel approach for text generation using feature-mover’s distance (FMD), called feature mover GAN (FM-GAN). By applying our model to several tasks, we demonstrate that it delivers good performance compared to existing text generation approaches. For future work, FM-GAN has the potential to be applied on other tasks such as image captioning [56], joint distribution matching [17, 46, 9, 34, 55, 45], unsupervised sequence classification [39], and unsupervised machine translation [4, 12, 33].

Acknowledgments

This research was supported in part by DARPA, DOE, NIH, ONR and NSF.

References

  • [1] S. Afriat. Theory of maxima and the method of lagrange. SIAM Journal on Applied Mathematics, 1971.
  • [2] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
  • [3] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In ICML, 2017.
  • [4] M. Artetxe, G. Labaka, E. Agirre, and K. Cho. Unsupervised neural machine translation. In ICLR, 2018.
  • [5] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
  • [6] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In NIPS, 2015.
  • [7] A. A. Bruen and M. A. Forcinito. Cryptography, information theory, and error-correction: a handbook for the 21st century, volume 68. John Wiley & Sons, 2011.
  • [8] T. Che, Y. Li, R. Zhang, R. D. Hjelm, W. Li, Y. Song, and Y. Bengio. Maximum-likelihood augmented discrete generative adversarial networks. In arXiv:1702.07983, 2017.
  • [9] L. Chen, S. Dai, Y. Pu, E. Zhou, C. Li, Q. Su, C. Chen, and L. Carin. Symmetric variational autoencoder and connections to adversarial learning. In AISTATS, 2018.
  • [10] K. Cho, B. Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP, 2014.
  • [11] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. JMLR, 2011.
  • [12] A. Conneau, G. Lample, M. Ranzato, L. Denoyer, and H. Jégou. Word translation without parallel data. arXiv preprint arXiv:1710.04087, 2017.
  • [13] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In NIPS, 2013.
  • [14] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollár, and J. Gao. From captions to visual concepts and back. In CVPR, 2015.
  • [15] W. Fedus, I. Goodfellow, and A. M. Dai. MaskGAN: Better text generation via filling in the _. ICLR, 2018.
  • [16] W. N. Francis. Brown corpus manual. http://icame.uib.no/brown/bcm.html, 1979.
  • [17] Z. Gan, L. Chen, W. Wang, Y. Pu, Y. Zhang, H. Liu, C. Li, and L. Carin. Triangle generative adversarial networks. In NIPS, 2017.
  • [18] Z. Gan, Y. Pu, R. Henao, C. Li, X. He, and L. Carin. Learning generic sentence representations using convolutional neural networks. In EMNLP, 2017.
  • [19] A. Genevay, G. Peyré, and M. Cuturi. Learning generative models with sinkhorn divergences. In AISTATS, 2018.
  • [20] A. N. Gomez, S. Huang, I. Zhang, B. M. Li, M. Osama, and L. Kaiser. Unsupervised cipher cracking using discrete gans. In arXiv:1801.04883, 2018.
  • [21] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
  • [22] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. JMLR, 2012.
  • [23] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of Wasserstein GANs. In NIPS, 2017.
  • [24] J. Guo, S. Lu, H. Cai, W. Zhang, Y. Yu, and J. Wang. Long text generation via adversarial training with leaked information. In AAAI, 2018.
  • [25] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 1997.
  • [26] Z. Hu, Z. Yang, X. Liang, R. Salakhutdinov, and E. P. Xing. Toward controlled generation of text. In ICML, 2017.
  • [27] F. Huszár. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? In arXiv:1511.05101, 2015.
  • [28] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-softmax. In ICLR, 2017.
  • [29] Y. Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
  • [30] H. Kucera and W. Francis. A standard corpus of present-day edited american english, for use with digital computers (revised and amplified from 1967 version), 1979.
  • [31] M. J. Kusner and J. M. Hernández-Lobato. GANS for sequences of discrete elements with the Gumbel-softmax distribution. In arXiv:1611.04051, 2016.
  • [32] A. Lamb, V. Dumoulin, and A. Courville. Discriminative regularization for generative models. In arXiv:1602.03220, 2016.
  • [33] G. Lample, M. Ott, A. Conneau, L. Denoyer, and M. Ranzato. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755, 2018.
  • [34] C. Li, H. Liu, C. Chen, Y. Pu, L. Chen, R. Henao, and L. Carin. Alice: Towards understanding adversarial learning for joint distribution matching. In NIPS, 2017.
  • [35] J. Li, R. Jia, H. He, and P. Liang. Delete, retrieve, generate: A simple approach to sentiment and style transfer. In NAACL, 2018.
  • [36] J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. Adversarial learning for neural dialogue generation. In EMNLP, 2017.
  • [37] K. Lin, D. Li, X. He, Z. Zhang, and M.-T. Sun. Adversarial ranking for language generation. In NIPS, 2017.
  • [38] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
  • [39] Y. Liu, J. Chen, and L. Deng. An unsupervised learning method exploiting sequential output statistics. In arXiv:1702.07817, 2017.
  • [40] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. In ICLR, 2017.
  • [41] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. In ISCA, 2010.
  • [42] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, 2016.
  • [43] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: a method for automatic evaluation of machine translation. In ACL, 2002.
  • [44] S. Prabhumoye, Y. Tsvetkov, R. Salakhutdinov, and A. W. Black. Style transfer through back-translation. In ACL, 2018.
  • [45] Y. Pu, S. Dai, Z. Gan, W. Wang, G. Wang, Y. Zhang, R. Henao, and L. Carin. Jointgan: Multi-domain joint distribution learning with generative adversarial nets. In ICML, 2018.
  • [46] Y. Pu, W. Wang, R. Henao, L. Chen, Z. Gan, C. Li, and L. Carin. Adversarial symmetric variational autoencoder. In NIPS, 2017.
  • [47] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural networks. In ICLR, 2016.
  • [48] Y. Rubner, C. Tomasi, and L. J. Guibas. A metric for distributions with applications to image databases. In ICCV, 1998.
  • [49] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
  • [50] T. Salimans, H. Zhang, A. Radford, and D. Metaxas. Improving GANs using optimal transport. In ICLR, 2018.
  • [51] D. Shen, G. Wang, W. Wang, M. R. Min, Q. Su, Y. Zhang, C. Li, R. Henao, and L. Carin. Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms. In ACL, 2018.
  • [52] D. Shen, Y. Zhang, R. Henao, Q. Su, and L. Carin. Deconvolutional latent-variable model for text sequence matching. In AAAI, 2018.
  • [53] S. Shen, Y. Cheng, Z. He, W. He, H. Wu, M. Sun, and Y. Liu. Minimum risk training for neural machine translation. In ACL, 2015.
  • [54] T. Shen, T. Lei, R. Barzilay, and T. Jaakkola. Style transfer from non-parallel text by cross-alignment. In NIPS, 2017.
  • [55] C. Tao, L. Chen, R. Henao, J. Feng, and L. Carin. Chi-square generative adversarial network. In ICML, 2018.
  • [56] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015.
  • [57] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
  • [58] S. Wiseman and A. M. Rush. Sequence-to-sequence learning as beam-search optimization. In EMNLP, 2016.
  • [59] Y. Xie, X. Wang, R. Wang, and H. Zha. A fast proximal point method for Wasserstein distance. In arXiv:1802.04307, 2018.
  • [60] L. Yu, W. Zhang, J. Wang, and Y. Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In AAAI, 2017.
  • [61] Y. Zhang, Z. Gan, K. Fan, Z. Chen, R. Henao, D. Shen, and L. Carin. Adversarial feature matching for text generation. In ICML, 2017.
  • [62] J. Zhu, T. Park, P. Isola, and A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
  • [63] Y. Zhu, S. Lu, L. Zheng, J. Guo, W. Zhang, J. Wang, and Y. Yu. Texygen: A benchmarking platform for text generation models. In SIGIR, 2018.

Appendix A Proof

In this section, we use the Envelope theorem to show that the gradient with respect to the transport matrix $\mathbf{T}^*$ need not be computed in our algorithm.

Theorem 1

(Envelope theorem [1]) Let $f(\boldsymbol{x}, \boldsymbol{\alpha})$ and $g(\boldsymbol{x}, \boldsymbol{\alpha})$ be real-valued continuously differentiable functions, where $\boldsymbol{x}$ are the variables and $\boldsymbol{\alpha}$ are the parameters. We assume $\boldsymbol{x}^*(\boldsymbol{\alpha})$ is the optimal solution of $f$ with fixed $\boldsymbol{\alpha}$ and constraint $g(\boldsymbol{x}, \boldsymbol{\alpha}) = c$, i.e.,

$$\boldsymbol{x}^*(\boldsymbol{\alpha}) \;=\; \underset{\boldsymbol{x}:\, g(\boldsymbol{x}, \boldsymbol{\alpha}) = c}{\arg\min}\; f(\boldsymbol{x}, \boldsymbol{\alpha})\,.$$

Then the value function $V(\boldsymbol{\alpha}) = f(\boldsymbol{x}^*(\boldsymbol{\alpha}), \boldsymbol{\alpha})$ is also continuous and differentiable, and its derivative over $\boldsymbol{\alpha}$ is

$$\frac{d V(\boldsymbol{\alpha})}{d \boldsymbol{\alpha}} \;=\; \frac{\partial f(\boldsymbol{x}, \boldsymbol{\alpha})}{\partial \boldsymbol{\alpha}}\Big|_{\boldsymbol{x} = \boldsymbol{x}^*(\boldsymbol{\alpha})} \;+\; \lambda\, \frac{\partial g(\boldsymbol{x}, \boldsymbol{\alpha})}{\partial \boldsymbol{\alpha}}\Big|_{\boldsymbol{x} = \boldsymbol{x}^*(\boldsymbol{\alpha})}\,,$$

where $\lambda$ is the Lagrange multiplier associated with the constraint.

In our case, $f$ is the FMD objective $\langle \mathbf{T}, \mathbf{C} \rangle$ in (3), $\boldsymbol{x}$ corresponds to the transport matrix $\mathbf{T}$, and the parameters are those of the feature extractor, $\phi$, and of the generator, $\theta$, which enter only through the cost matrix $\mathbf{C}$. Using the Envelope theorem, the gradient with respect to $\phi$ is:

$$\nabla_{\phi}\, \mathcal{D}_{\text{fmd}} \;=\; \big\langle \mathbf{T}^*,\; \nabla_{\phi} \mathbf{C} \big\rangle\,. \qquad (13)$$

Similarly, the gradient with respect to $\theta$ is:

$$\nabla_{\theta}\, \mathcal{D}_{\text{fmd}} \;=\; \big\langle \mathbf{T}^*,\; \nabla_{\theta} \mathbf{C} \big\rangle\,. \qquad (14)$$

Eqns. (13) and (14) show that the derivative with respect to the flow matrix $\mathbf{T}^*$ is not computed.

Appendix B Additional experimental results

B.1 Quantitative results

The detailed quantitative results are shown in Tables 7 to 12.

Method MLE SeqGAN RankGAN LeakGAN GSGAN textGAN Our model
BLEU-2 0.962 0.964 0.966 0.987 0.911 0.917 0.982
BLEU-3 0.911 0.910 0.905 0.971 0.840 0.849 0.941
BLEU-4 0.823 0.822 0.831 0.924 0.758 0.745 0.852
BLEU-5 0.703 0.707 0.709 0.854 0.624 0.661 0.729
Table 8: Self BLEU results on CUB.
Method MLE SeqGAN RankGAN LeakGAN GSGAN textGAN Our model
BLEU-2 0.927 0.944 0.936 0.978 0.954 0.941 0.945
BLEU-3 0.823 0.850 0.845 0.946 0.867 0.849 0.846
BLEU-4 0.676 0.722 0.701 0.892 0.726 0.699 0.684
Table 7: Test BLEU results on CUB.
Method MLE SeqGAN RankGAN GSGAN LeakGAN VAE textGAN Our model
BLEU-2 0.820 0.820 0.852 0.810 0.922 0.926 0.910 0.942
BLEU-3 0.607 0.604 0.637 0.566 0.797 0.774 0.728 0.812
BLEU-4 0.389 0.361 0.389 0.335 0.602 0.552 0.484 0.618
BLEU-5 0.248 0.211 0.248 0.197 0.416 0.362 0.306 0.414
Table 10: Self BLEU results on MS COCO.
Method MLE SeqGAN RankGAN GSGAN LeakGAN VAE textGAN Our model
BLEU-2 0.754 0.807 0.822 0.785 0.912 0.830 0.806 0.831
BLEU-3 0.511 0.577 0.592 0.522 0.825 0.597 0.548 0.632
BLEU-4 0.232 0.278 0.288 0.230 0.689 0.284 0.217 0.325
Table 9: Test BLEU results on MS COCO.
Method MLE SeqGAN RankGAN LeakGAN GSGAN textGAN Our model
BLEU-2 0.761 0.630 0.774 0.920 0.723 0.777 0.932
BLEU-3 0.468 0.354 0.484 0.725 0.440 0.529 0.771
BLEU-4 0.231 0.164 0.249 0.502 0.210 0.305 0.552
BLEU-5 0.116 0.087 0.131 0.321 0.107 0.161 0.399
Table 12: Self BLEU results on EMNLP WMT.
Method MLE SeqGAN RankGAN LeakGAN GSGAN textGAN Our model
BLEU-2 0.664 0.728 0.672 0.857 0.682 0.806 0.831
BLEU-3 0.337 0.411 0.346 0.696 0.410 0.548 0.682
BLEU-4 0.113 0.139 0.118 0.373 0.231 0.287 0.385
Table 11: Test BLEU results on EMNLP WMT.
Original: the new yorker was amazing .
CAE: the new off was not funny .
Ours: the new yorker was horrible either .
Original: it was beautiful and lined with lady fingers to cover sides .
CAE: it was impossible and it were impossible to get with _num_ days
Ours: it was beautiful and lady with someone to fix table to deliver .
Original: the subs are so delicious .
CAE: the bathrooms are just so bland .
Ours: the subs are so bland .
Original: pasta , sandwiches , and desserts .
CAE: _num_ , and salsa , and wings .
Ours: pasta , salad , sandwiches , and desserts .
Original: beautiful building and a memorable experience .
CAE: clean and a disaster experience experience experience .
Ours: beautiful building and a fluke experience memorable .
Original: my experience was horrible .
CAE: my experience was great !
Ours: my experience was amazing .
Original: an employee could not find it , and his manager could not find it .
CAE: it ’s a new place and it , and it ’s my dog loves it !
Ours: my employee could not find it , and his manager could not find it .
Original: the place was dirty , way crowded with crap products .
CAE: the place was clean , clean with clean .
Ours: the place is clean , kind with crap products .
Original: the worst place you can go to .
CAE: the best place to go .
Ours: the best place you can go .
Original: this place is a shit hole the management is nonexistent after _num_ o’clock .
CAE: this place is a clean place is the best number are occupied .
Ours: this place is a cool hole the staff is after _num_ o’clock .
Original: i love the food … however service here is horrible .
CAE: i love the service here is awesome service .
Ours: i love the food service is great here .
Table 13: Sentiment transfer examples.

B.2 Qualitative results

The samples of sentiment transfer can be found in Table 13. The samples of text generation from different models can be found in Tables 14, 15, and 16.

TextGAN: - this bird is brown in color
- a small bird with a light brown head short feet and small bird
- this bird has curved thighs and a white belly
- the bird has a red eyebrow a white and breast with a white breast and striped primaries
- this bird is brown
- this particular bird is a very large
- this bird has wings that are black
- this is a bird has a stubby and body a white and black bird with white is white and black and black is spotted in color
- a medium sized bird has a yellow belly and grey breasts
- this bird has brown wings the coverts are white and brown
- this bird has wings that are
- this bird has legs looks like neon
- this medium sized bird is yellow in
- this particular bird has a white belly and breasts and red
- a medium black and black bird with a medium sized bird with gray wings and a long pointed beak
- a small black bird has a small bill
- the bird is brown and white bird has fat feathers with brown is black
- this is a black bird is yellow and has dark brown colors is green with a small head and is blue
- a bird with small straight bill is brown with patches of blue white
- this bird has a short bill green belly and sides and has white
LeakGAN: - this bird has wings that are black and yellow and has a red belly
- this bird has a brown crown as well as a white belly
- this bird has wings that are black and has a grey belly
- this bird has a small , black body and neck , a black crown , grey and white wings and a long curved beak
- this bird has a black crown , a black bill , and a white breast
- this bird has a white belly , a red breast , and a white breast
- this bird has wings that are black and has a yellow belly
- this bird has a white crown as well as a red chest
- a small bird with a gray head , white belly , black beak , and a white belly
- this bird has a long , straight bill , a green cheek patch and a white breast
- this bird has a red crown with grey belly
- this is a small bird with a black wing and a long beak
- a bird with a small beak that is punctuated by a blue head
- this bird has a orange crown with a black throat and a black crown
- a small bird with a white breast and brown body
- a small bird with a brown crown , and a black crown
- a bird with a white underside and black cheek patches and a long pointed bill
- this bird has a yellow belly and breast with a black crown and short pointy bill
- this bird has wings that are black and white and has a yellow crown
- the bird has a yellow belly and a small bill
FM-GAN: - this bird is white with orange on its neck
- this particular bird has a belly that is yellow and brown with a long skinny bill
- this large bird has a long neck a short black downward curved beak
- this tiny yellow bird has a brown speckled back and black feet
- a large sized swimming bird that has a green patch around its long bill
- the bird has brown plumage and a large yellow beak
- this yellow bird has a pointy beak with a long wing
- this bird has a spotted belly back a white belly and eyebrows and a brown back and head with a pointy black bill
- this bird is white with a brown eye ring
- a colorful bird with a blue head and curved sharp black bill
- a yellow bird with speckled brow and red eyes and a speckled breast
- this bird is completely brown with a short skinny bill
- the bird has a short grayish triangular shaped crown and bright green covert on the rest of its speckled body
- this bird has a small bill black eyes with a darker brown cheek patch and a white breast with a darker color
- this brown bird has long black tail feathers round crown
- a very small round bird whose body is round and dark colors
- a tall bird with a short bill a yellow eyebrow and a white crown
- this bird is black with an orange belly and orange sides
- a small bird with a unique set of black blue eyebrow stripe and tail with a white stripe over and eyebrow
- a bird with a fat body with orange yellow black and yellow and striped top of feathers

Table 14: Generated samples from CUB dataset using different models.
TextGAN: - a display store with colorful chairs with no leash as they can walk
- a bathroom with many signs on it and many businesses
- a motorcycle in white shirt texts on reading of paper
- a family dressed looking toward the large mans reflection in the corner
- a cat that is standing on the corner of water
- two giraffes hang around with palm trees
- a woman sits at the phone while sitting holding onto her cell phone in an office
- a young girl stands outside with no parking umbrella
- a red bus drives around in front of an automobile station
- a crosswalk is on both sides
- a pair of walk sign with pink and flowers in it
- a living area with seating area for sale sign
- a small aircraft with landed at the water
- a closeup of a big boat being driven by the water
- a smiling group of travel down in london
- a motorcycle area with benches with many sides of
- a sea park bench being displayed on a sunny day
- a kitchen filled with cluttered pots and decorated clutter
- several young adults with skis and posing for a cone
- a number of workers cooking in the kitchen
LeakGAN: - a number of signs hanging from a car
- a train near a building that is sitting on the road
- a number of people on a sidewalk with a street sign
- a train traveling down train tracks while families approaches the nurse appears
- a couple of elephants that are walking through a grassy field
- a couple of giraffes that are walking over the dirt road
- a group of people walking along side to cross the street
- a bird is flying over a domed chimney
- a couple of buses driving down a street
- a small bird sitting on a tree limb at the beach
- a group of people standing on top of a train next to slue of tracks
- a fire hydrant has growing off the side of it
- a bunch of people that are walking and other signs
- a man and a child standing outside a window of a store
- a man holding a stuffed bear levers leaves
- a train that is sitting on the train tracks
- a brown and orange train that is pouring out into the distance
- a man is sitting on a saddled horse
- a weird dozen bicycles are on a city street
- some people are on a fire hydrant on the street
FM-GAN: - a man sitting around a laptop and holding something
- a large group of people walking on a sidewalk
- a white cat is placed on the hood of a bus stop
- trucks drive down a town area near a building
- two teddy bears and other items on top of a street
- a man is laying in a chair looking at a subway
- a motorcycle is sitting at a street with bags
- a plate of steak is cutting a sauce on it
- a woman sitting on top of a computer on a table
- a group of red sheep on a mountain range
- a man holding a cell phone and small black bowl
- a couple of people riding on a dirt horse next to a tree
- a living room is outside of the kitchen
- a tall building is under a blue plane
- a man standing on a white motorcycle with an umbrella stands in the snow
- a goat is standing next to the water
- a cat looking in a bathtub next to a sink
- a yellow bus on street curb near a sidewalk
- the giraffes are walking by their zoo enclosure in the distance
- there are many people in public office together
Table 15: Generated samples from COCO dataset using different models.

LeakGAN: - the government is not perfect survivors to negotiating matter with a huge amount of protection and work , " she said .
- i was always delighted that my principal colleague has been able to do it at the moment , and it was out for the family .
- i don ’ t know if consumers regard to defend what they have to do , " he said at the front door .
- she added : " migrants infected 92 species have been in a wake - up - and the number of levels they would expect to get a fee to review their own .
- the uk has thought israel orders complicated figures and access its short - term tax cuts to the u . s . single security capital .
- the new zealand president waves heavily turkish stations , and the united states have been the first largest post - the market in a decade .
- the australian dollar , meanwhile – benchmark nsw has to attract the older consumers in 2015 .
- the united states will carpet alert ankara waves in april state and washington ’ s following the u . s . military has ordered the labour party .
- " i think there recognised israel stability document , " as an adviser , where he was in the defense , and in the south china sea .
- these are our first latest reasons why renewable energy will lead to the world , we grow up in growth and winning the market .
- the north korean company approved revenue pace worldwide and financial options and its earnings is an increase in 2 . 8 billion , compared to 2 . 3 million per year .
- the company ’ s shares rebel frequent export measures to impose a 50 - billion dollar to help china ’ s interest rates .
- the average results , benchmark forecast pace heavily in last season , in 2017 , then it will be common - even and above the standard record .
- the data combined , meanwhile that venezuela infected are more than 0 . 3 percent compared with 1 . 4 million to cut out their highest wages .
- the spokesman said : " migrants infected orders to the islamic state , and the u . s . security council said , the other person will receive more than 15 years.
FM-GAN: - I think about the quality of life and we ’ ve got to see you just , and you ’ re not happy with you and I ’ m sure .
- The company said the " biggest decision is to be used to provide its own content , which is the biggest test . "
- I am not in connection with her parents at the age of the girl and she was walking home with her father .
- It is a huge opportunity to have a lot of time in the world , but we have to see the family of the community in it .
- It was a great thing for us to be in the case , but I am not sure it was .
- In the past , they are not aware of the case , which they are not available to them , according to the researchers read .
- The US , South Korea has been able to provide a new social network from the community and also working together .
- Women ’ s health services have also seen as a report from the UK ’ s largest population in 2015 , according to Reuters .
- Officials who have already had to support from the state to meet with them in the past few years , he has already said .
- It ’ s been clear from this year , so we ’ ve been working with many people .
- I have to be able to make sure you are to make a difference in the community , where they are working with their own care of what they can .
- The video was a video of a study found that the child had been diagnosed with the same , but the video is found .
- Clinton has been far more than 200 million in July since 2012 , according to the US Secretary of State .
- Russia is expected to be a significant increase in the EU referendum on the EU , which has been announced by the EU referendum .
- " I think that people are trying to find out of the incident ," she told The Sun on the phone and not to be heard .

Table 16: Generated samples from EMNLP WMT dataset using different models.