Adversarial Feature Matching for Text Generation

06/12/2017, by Yizhe Zhang et al.

The Generative Adversarial Network (GAN) has achieved great success in generating realistic (real-valued) synthetic data. However, convergence issues and difficulties dealing with discrete data hinder the applicability of GAN to text. We propose a framework for generating realistic text via adversarial training. We employ a long short-term memory network as generator, and a convolutional network as discriminator. Instead of using the standard objective of GAN, we propose matching the high-dimensional latent feature distributions of real and synthetic sentences, via a kernelized discrepancy metric. This eases adversarial training by alleviating the mode-collapsing problem. Our experiments show superior performance in quantitative evaluation, and demonstrate that our model can generate realistic-looking sentences.


1 Introduction

Generating meaningful and coherent sentences is central to many natural language processing applications. The general idea is to estimate a distribution over sentences from a corpus, and then use it to sample realistic-looking sentences. This task is important because it enables the generation of novel sentences that preserve the semantic and syntactic properties of real-world sentences, while being potentially different from any of the examples used to estimate the model. For instance, in the context of dialog generation, it is desirable to generate answers that are more diverse and less generic (Li et al., 2016).

One simple approach consists of first learning a latent space to represent (fixed-length) sentences using an encoder-decoder (autoencoder) framework based on Recurrent Neural Networks (RNNs) (Cho et al., 2014; Sutskever et al., 2014), and then generating synthetic sentences by decoding random samples from this latent space. However, this approach often fails to generate realistic sentences from arbitrary latent representations. The reason is that, when mapping sentences to their latent representations using an autoencoder, the mappings usually cover a small but structured region of the latent space, which corresponds to a manifold embedding (Bowman et al., 2016). In practice, most regions of the latent space do not necessarily map (decode) to realistic sentences. Consequently, randomly sampling latent representations often yields nonsensical sentences. Recent work by Bowman et al. (2016) has attempted to generate more diverse sentences via RNN-based variational autoencoders. However, they did not address the fundamental problem that the posterior distribution over latent variables does not appropriately cover the latent space.

Another underlying challenge of generating realistic text relates to the nature of the RNN. During inference, the RNN generates words in sequence from previously generated words, in contrast to training, where ground-truth words are used at every step. As a result, errors accumulate in proportion to the length of the sequence: the first few words look reasonable, but quality deteriorates quickly as the sentence progresses. Bengio et al. (2015) coined this phenomenon exposure bias, and proposed the scheduled sampling approach to address it. However, Huszár (2015) showed that scheduled sampling is a fundamentally inconsistent training strategy, in that it produces largely unstable results in practice.

The Generative Adversarial Network (GAN) (Goodfellow et al., 2014) is an appealing and natural answer to the above issues. GAN matches the distributions of synthetic and real data by introducing an adversarial game between a generator and a discriminator. The GAN objective seeks to obtain a generator that functionally maps samples from a given (simple) prior distribution to synthetic data that appear realistic. The GAN setup explicitly encourages the latent representations of real data (via encoding) to be distributed in a manner consistent with the specified prior (e.g., Gaussian or uniform). Due to the nature of adversarial training, the discriminator compares real and synthetic sentences, rather than their individual words, which in principle should alleviate the exposure-bias issue. Recent work (Lamb et al., 2016) has incorporated an additional discriminator to train a sequence-to-sequence language model that better preserves long-term dependencies.

Effort has also been made to generate realistic-looking sentences via adversarial training. For instance, by borrowing ideas from reinforcement learning, Yu et al. (2017) and Li et al. (2017) treat sentence generation as a sequential decision-making process. Despite the success of these methods, two fundamental problems of the GAN framework limit their use in practice: (i) the generator tends to produce a single observation for multiple latent representations, i.e., mode collapsing (Metz et al., 2017), and (ii) the generator's contribution to the learning signal is insubstantial when the discriminator is close to its local optimum, i.e., vanishing-gradient behavior (Arjovsky & Bottou, 2017).

In this paper we propose a new framework, TextGAN, to alleviate the problems associated with generating realistic-looking sentences via GAN. Specifically, a Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) RNN is used as the generator, and a Convolutional Neural Network (CNN) (Kim, 2014) is used as the discriminator. We consider a kernel-based moment-matching scheme over a Reproducing Kernel Hilbert Space (RKHS), to force the empirical distributions of real and synthetic sentences to have matched moments in latent-feature space. As a consequence, our approach ameliorates the mode-collapsing issue associated with standard GAN training. This strategy encourages the model to learn representations that are both informative of the original sentences (via the autoencoder) and discriminative w.r.t. synthetic sentences (via the discriminator). We also propose several complementary techniques, including initialization strategies and discretization approximations, to ease GAN training and to achieve superior performance compared to related approaches.

2 Model

2.1 Generative Adversarial Networks

GAN (Goodfellow et al., 2014) aims to obtain the equilibrium of the following optimization objective

min_G max_D L_GAN = E_{s ~ p_data}[log D(s)] + E_{z ~ p_z}[log(1 - D(G(z)))] ,   (1)

where L_GAN is maximized w.r.t. D and minimized w.r.t. G. Note that the first term of (1) does not depend on G. Observed (real) data, s, are sampled from the empirical distribution p_data. The latent code, z, that feeds into the generator, G, is drawn from a simple prior distribution p_z. When the discriminator is optimal, solving this adversarial game is equivalent to minimizing the Jensen-Shannon Divergence (JSD) (Arjovsky & Bottou, 2017) between the real data distribution p_data and the synthetic data distribution p_G of samples s̃ = G(z), z ~ p_z (Goodfellow et al., 2014). However, in most cases, the saddle-point solution of the objective in (1) is intractable. Therefore, a procedure to iteratively update D and G is often applied.

Arjovsky & Bottou (2017) pointed out that the standard GAN objective in (1) suffers from an unstably weak learning signal when the discriminator gets close to a local optimum, due to the gradient-vanishing effect. This is because the JSD implied by the original GAN loss becomes a constant if p_data and p_G share no support, thus minimizing the JSD yields no learning signal. This problem also exists in the recently proposed energy-based GAN (EBGAN) (Zhao et al., 2017), as the distance metric implied by EBGAN is the Total Variation Distance (TVD), which suffers from the same issue as the JSD, as shown by Arjovsky et al. (2017).

2.2 TextGAN

Given a sentence corpus S, instead of directly optimizing the objective of the standard GAN in (1), we adopt an approach similar to the feature matching scheme of Salimans et al. (2016). Specifically, we consider the objectives

L_D = L_GAN - λ_r L_recon + λ_m L_MMD^2 ,   (2)
L_G = L_MMD^2 ,   (3)

where L_D and L_G are iteratively maximized w.r.t. D and minimized w.r.t. G, respectively. L_GAN is the standard GAN objective in (1). L_recon = ||ẑ - z||^2 is the Euclidean distance between the reconstructed latent code, ẑ, and the original code, z, drawn from the prior distribution p_z. We denote the synthetic sentences as s̃ = G(z), where z ~ p_z. L_MMD^2 represents the Maximum Mean Discrepancy (MMD) (Gretton et al., 2012) between the empirical distributions of the sentence embeddings f_s̃ and f_s, for synthetic and real data, respectively. The model framework is illustrated in Figure 1 and detailed below.
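To make the composition of these objectives concrete, the following minimal Python/NumPy sketch assembles L_D and L_G from their components, assuming the reconstructions of (2)-(3) above; the function names, the default λ values, and the small eps inside the logarithms are illustrative choices rather than the paper's implementation, and the squared-MMD term is assumed to be precomputed on the feature vectors (see the sketch following Eq. (4) below).

```python
import numpy as np

def l_gan(d_real, d_fake, eps=1e-8):
    # Standard GAN objective of (1): E[log D(s)] + E[log(1 - D(s_tilde))].
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

def l_recon(z_hat, z):
    # Euclidean distance between reconstructed and original latent codes.
    return np.mean(np.sum((z_hat - z) ** 2, axis=1))

def l_d(d_real, d_fake, z_hat, z, l_mmd2, lambda_r=1.0, lambda_m=1.0):
    # Discriminator objective (2), to be maximized w.r.t. D; l_mmd2 is the
    # squared MMD of (4), computed on the features f_s and f_s_tilde.
    return l_gan(d_real, d_fake) - lambda_r * l_recon(z_hat, z) + lambda_m * l_mmd2

def l_g(l_mmd2):
    # Generator objective (3), to be minimized w.r.t. G.
    return l_mmd2
```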

Figure 1: Model scheme of TextGAN. Latent codes z are fed through a generator G to produce synthetic sentences s̃. Synthetic and real sentences (s̃ and s) are fed into a binary discriminator D, for real vs. fake (synthetic) prediction, and also for latent code reconstruction ẑ. f_s̃ and f_s represent the features of s̃ and s, respectively.

We first consider L_G in (3). The generator G attempts to adjust itself to produce synthetic sentences s̃, with features f_s̃ encoded by D, that mimic the real sentence features f_s (also encoded by D). This is achieved by matching the empirical distributions of f_s̃ and f_s via the MMD objective.

Concisely, MMD measures the mean squared difference between two sets of samples X = {x_i}_{i=1}^n and Y = {y_j}_{j=1}^m, where x_i, y_j ∈ R^d, d is the dimensionality of the samples, and n and m are the sample sizes of X and Y, respectively. The MMD metric characterizes the difference between X and Y over a Reproducing Kernel Hilbert Space (RKHS), H, associated with a kernel function k(·,·). The kernel can be written as an inner product over H, k(x, x') = <φ(x), φ(x')>_H, where φ(x) is denoted as the feature mapping (Gretton et al., 2012). Formally, the (squared) MMD for the two empirical distributions X and Y is given by

L_MMD^2 = || (1/n) Σ_i φ(x_i) - (1/m) Σ_j φ(y_j) ||_H^2
        = (1/n^2) Σ_{i,i'} k(x_i, x_{i'}) - (2/(nm)) Σ_{i,j} k(x_i, y_j) + (1/m^2) Σ_{j,j'} k(y_j, y_{j'}) .   (4)

Note that L_MMD^2 reaches its minimum when the two empirical distributions X and Y (in general) match exactly. For example, with a polynomial kernel, k(x, y) = (x^T y + c)^q, minimizing L_MMD^2 can be understood as matching moments of the two empirical distributions up to order q. With a universal kernel such as the Gaussian kernel, k(x, y) = exp(-||x - y||^2 / (2σ^2)) with bandwidth σ, minimizing the MMD objective matches moments of all orders (Gretton et al., 2012). Here, we use MMD to match the empirical distributions of f_s̃ and f_s using a Gaussian kernel.
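As a concrete illustration, below is a minimal NumPy sketch of the Gaussian-kernel MMD estimator in (4); the single bandwidth, the biased (V-statistic) estimator, and the toy data are illustrative choices, not the paper's code (the experiments use a mixture of five bandwidths; see Section 4).

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), evaluated pairwise
    # between the rows of X (n x d) and Y (m x d).
    d2 = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Y ** 2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=20.0):
    # Biased estimator of the squared MMD in (4) between the empirical
    # distributions of X and Y, e.g. real and synthetic feature vectors.
    kxx = gaussian_kernel(X, X, sigma).mean()
    kyy = gaussian_kernel(Y, Y, sigma).mean()
    kxy = gaussian_kernel(X, Y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

# Toy check: two samples from different Gaussians give a larger MMD than
# two independent samples from the same Gaussian.
rng = np.random.default_rng(0)
f_real_a = rng.normal(0.0, 1.0, size=(256, 900))
f_real_b = rng.normal(0.0, 1.0, size=(256, 900))
f_fake = rng.normal(0.5, 1.0, size=(256, 900))
print(mmd2(f_real_a, f_real_b), mmd2(f_real_a, f_fake))
```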

The adversarial discriminator D associated with the loss in (2) aims to produce sentence features that are most discriminative, representative and challenging. These aims are explicitly represented by the three components of (2), namely: (i) L_GAN requires f_s and f_s̃ to be discriminative of real and synthesized sentences; (ii) L_recon requires f_s and f_s̃ to preserve maximum reconstruction information for the latent code that generates the synthetic sentences; and (iii) L_MMD^2 forces D to select the most challenging features for the generator to match.

In situations where simple features are enough for the discrimination/reconstruction task, this additional loss seeks to estimate complex features that are difficult for the current generator to match, thus improving G in terms of generation ability. In our experience, the reconstruction and MMD losses in L_D serve as regularizers for the binary classification loss: by adding these losses, discriminator features tend to be more spread out in the feature space.

In summary, the adversarial game associated with (2) and (3) is the following: D attempts to select informative sentence features, while G aims to match these features. Parameters λ_r and λ_m act as trade-offs between discrimination ability, and reconstruction and moment-matching precision, respectively. We argue that this framework has several advantages over the standard GAN objective in (1).

The original GAN objective has been shown to be prone to mode collapsing, especially when the so-called -log D alternative for the generator loss is used (Metz et al., 2017), i.e., replacing the second term of (1) by E_{z ~ p_z}[-log D(G(z))]. This is because, when this alternative loss is used, fake-looking samples are penalized more severely than less diverse samples (Arjovsky & Bottou, 2017), thus grossly underestimating the variance of the latent features. The loss in (3), on the other hand, forces the generator to produce highly diverse sentences to match the variation of real sentences via latent moment matching, thus alleviating the mode-collapsing problem. We believe that leveraging MMD is general enough to be useful as a framework in other data domains, e.g., images. Presumably, the discrete nature of text data makes standard GAN prone to mode collapsing, which is manifested by close neighbors in latent code space producing the same text output. In our approach, MMD and feature matching are introduced to alleviate mode collapsing, with text data as the motivating domain. However, whether such an objective is free from the convergence issues of the standard GAN, due to vanishing gradients from the generator, is known to be problem specific (Arjovsky & Bottou, 2017).

Arjovsky & Bottou (2017) demonstrated that the JSD yields weak gradient signals when the real and synthetic data are far apart. To deliver stable gradients, a smoother distance metric over the data domain is required. In (4), we are essentially employing a Neural Network (NN) embedding via a Gaussian kernel for matching f_s̃ and f_s, i.e., k̃(s, s') = exp(-||f(s) - f(s')||^2 / (2σ^2)), where f(·) denotes the NN embedding that maps from the data domain to the feature domain (so that f_s = f(s)). Under the assumption that f(·) is a bijective mapping, i.e., distinct sentences have different embedded feature vectors, in the Supplementary Material we prove that if the original kernel function k is universal, the composed kernel k̃ is also universal. As shown in Gretton et al. (2012), the MMD is a proper metric when the kernel is universal. In fact, if the kernel function is universal, the MMD metric is no worse than the TVD in terms of vanishing gradients (Arjovsky et al., 2017). However, if the bandwidth of the kernel is too small, much smaller than the average distance between data points, the vanishing gradient problem remains (Arjovsky et al., 2017).

Additionally, seeking to match the sentence features provides a more achievable and informative objective than directly trying to mislead the discriminator as in standard GAN. Specifically, the loss in (3) implies a clearer aim for the generator, as it requires matching the latent features (distribution-wise) as opposed to merely trying to fool a binary classifier.

Note that if the latent features from real and synthetic data have similar distributions, it is unlikely that the discriminator, which uses these features as inputs, will be able to tell them apart. Implementation-wise, the updating signal for the generator does not need to propagate all the way back from the discriminator output, but comes directly from the feature layer, and is thus less prone to fading. We believe there may be other possible approaches for text generation using GAN; here we provide a first attempt toward overcoming some of the difficulties associated with it.

2.3 Alternative (data efficient) objectives

One limitation of the proposed approach is that the dimensionality of the features f_s and f_s̃ could be much larger than the size of the subset of data (minibatch) used during learning, hence the empirical distributions may not be sufficiently representative. In fact, a reliable Gaussian-kernel MMD two-sample test generally requires the size of the minibatch to be proportional to the number of dimensions (Ramdas et al., 2014). To alleviate this issue, we consider two strategies.

Compressing network

We map f_s and f_s̃ into a lower-dimensional feature space using a compressing network with fully connected layers, also learned by D. This is sensible because the discriminator will still encourage the most challenging features to be abstracted (compressed) from the original features f_s and f_s̃. This approach provides significant computational savings, as the cost of computing the MMD in (4) scales with d, the dimensionality of the feature vectors. However, a lower-dimensional mapping may miss valuable information, and finding the optimal mapping dimension may be difficult in practice. There is a trade-off between fast estimation and a richer feature vector, controlled by setting d appropriately.

Gaussian covariance matching

We could also avoid using the kernel trick employed in (4). Instead, we can replace L_MMD^2 by the Gaussian covariance-matching loss in (5) below, where we accumulate (Gaussian) sufficient statistics from multiple minibatches, thus alleviating the inadequate-minibatch-size issue. Specifically,

L_cov = tr(Σ_s̃^{-1} Σ_s + Σ_s^{-1} Σ_s̃) + (μ_s̃ - μ_s)^T (Σ_s̃^{-1} + Σ_s^{-1}) (μ_s̃ - μ_s) ,   (5)

where Σ_s̃ and Σ_s represent the covariance matrices of the synthetic and real sentence feature vectors f_s̃ and f_s, respectively, and μ_s̃ and μ_s denote the corresponding mean vectors. By setting Σ_s̃ = Σ_s = I, (5) reduces to the first-moment feature matching technique from Salimans et al. (2016). Note that this loss is an upper bound of the JSD (omitting a constant, as proved in the Supplementary Material) between the two multivariate Gaussian distributions N(μ_s̃, Σ_s̃) and N(μ_s, Σ_s), which is more tractable than directly minimizing the JSD. The feature vectors used in (5) are the neural network outputs before applying any non-linear activation function. We note that the Gaussian assumption may still be strong in many cases. In practice, we use a moving average over the most recent minibatches to estimate all sufficient statistics Σ_s̃, Σ_s, μ_s̃ and μ_s. Further, Σ_s̃ and Σ_s are initialized to the identity matrix to prevent numerical problems.
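The sketch below illustrates one way such a covariance-matching loss and its running statistics could be computed with NumPy, assuming the symmetrized-Gaussian form of (5) reconstructed above; the eps regularizer, the exponential-moving-average decay, and the toy dimensions are illustrative choices (the paper averages over a window of recent minibatches rather than using an exponential average).

```python
import numpy as np

def gaussian_moment_loss(f_real, f_fake, eps=1e-3):
    # Covariance-matching objective in the spirit of (5): symmetrized Gaussian
    # divergence between the feature statistics of real and synthetic batches.
    # eps * I keeps the estimated covariances invertible.
    d = f_real.shape[1]
    mu_r, mu_f = f_real.mean(axis=0), f_fake.mean(axis=0)
    cov_r = np.cov(f_real, rowvar=False) + eps * np.eye(d)
    cov_f = np.cov(f_fake, rowvar=False) + eps * np.eye(d)
    inv_r, inv_f = np.linalg.inv(cov_r), np.linalg.inv(cov_f)
    dmu = mu_f - mu_r
    trace_term = np.trace(inv_f @ cov_r + inv_r @ cov_f)
    mean_term = dmu @ (inv_f + inv_r) @ dmu
    return trace_term + mean_term

class MovingStats:
    # Running estimates of the sufficient statistics (mean and covariance),
    # accumulated across minibatches; the covariance starts at the identity.
    def __init__(self, dim, decay=0.9):
        self.mu, self.cov, self.decay = np.zeros(dim), np.eye(dim), decay

    def update(self, feats):
        self.mu = self.decay * self.mu + (1 - self.decay) * feats.mean(axis=0)
        self.cov = self.decay * self.cov + (1 - self.decay) * np.cov(feats, rowvar=False)

# Toy usage on 32-dimensional features.
rng = np.random.default_rng(0)
f_real = rng.normal(0.0, 1.0, size=(256, 32))
f_fake = rng.normal(0.3, 1.2, size=(256, 32))
print(gaussian_moment_loss(f_real, f_fake))
```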

Figure 2: Top: CNN-based sentence discriminator/encoder. Bottom: LSTM sentence generator.

2.4 Model specification

Let w_t denote the t-th word in sentence s. Each word w_t is embedded into a k-dimensional word vector x_t = W_e[w_t], where W_e ∈ R^{k×V} is a (learned) word embedding matrix, V is the vocabulary size, and W_e[w] denotes the w-th column of matrix W_e.

CNN discriminator

We use the CNN architecture of Kim (2014); Collobert et al. (2011) for sentence encoding. It consists of a convolution layer followed by a max-pooling operation over the entire sentence for each feature map. A sentence of length T (padded where necessary) is represented as a matrix X ∈ R^{k×T}, by concatenating its word embeddings as columns, i.e., the t-th column of X is x_t.

As shown in Figure 2 (top), a convolution operation involves a filter W_c ∈ R^{k×h}, applied to a window of h words to produce a new feature. Following Collobert et al. (2011), we induce a latent feature map c = γ(X ∗ W_c + b) ∈ R^{T-h+1}, where γ(·) is a nonlinear activation function (we use the hyperbolic tangent, tanh), b is a bias vector, and ∗ denotes the convolution operator. We then apply a max-over-time pooling operation (Collobert et al., 2011) to the feature map and take its maximum value, i.e., ĉ = max{c}, as the feature corresponding to this particular filter. Convolving the same filter with the h-gram at every position in the sentence allows features to be extracted independently of their position in the sentence. This pooling scheme tries to capture the most salient feature, i.e., the one with the highest value, for each feature map, effectively filtering out less informative compositions of words. Further, this pooling scheme also guarantees that the extracted features are independent of the length of the input sentence.

The above process describes how one feature is extracted from one filter. In practice, the model uses multiple filters with varying window sizes. Each filter can be considered a linguistic feature detector that learns to recognize a specific class of h-grams. Assume we have m window sizes and, for each window size, we use p filters; we then obtain an mp-dimensional vector to represent a sentence. On top of this mp-dimensional feature vector, we specify a softmax layer to map the input sentence to an output D(X) ∈ [0, 1], representing the probability of X being drawn from the data distribution (real) rather than from the adversarial generator (synthesized).
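To make the convolution and max-over-time pooling concrete, here is a minimal NumPy sketch of the feature extraction for one bank of filters; the random weights, the 300-dimensional word embeddings, and the filter counts are illustrative stand-ins for the learned parameters described above.

```python
import numpy as np

def conv_max_pool(X, W, b):
    # X: (k, T) sentence matrix, word embeddings of dimension k as columns.
    # W: (F, k, h) bank of F filters spanning h consecutive words; b: (F,) biases.
    # Returns an F-dimensional feature vector: tanh convolution followed by
    # max-over-time pooling, so the output size is independent of T.
    k, T = X.shape
    F, _, h = W.shape
    feats = np.full(F, -np.inf)
    for t in range(T - h + 1):                    # slide the window over time
        window = X[:, t:t + h]                    # (k, h) slice of the sentence
        c = np.tanh(np.tensordot(W, window, axes=([1, 2], [0, 1])) + b)
        feats = np.maximum(feats, c)              # max over all positions
    return feats

# Example: window sizes {3, 4, 5} with 300 filters each -> 900-d sentence vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                    # k = 300, sentence length 40
sentence_vec = np.concatenate([
    conv_max_pool(X, rng.normal(size=(300, 300, h)) * 0.01, np.zeros(300))
    for h in (3, 4, 5)
])
print(sentence_vec.shape)                         # (900,)
```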

There are other CNN architectures in the literature (Kalchbrenner et al., 2014; Hu et al., 2014; Johnson & Zhang, 2015). We adopt the CNN model of Kim (2014); Collobert et al. (2011) due to its simplicity and excellent performance on sentence classification tasks.

LSTM generator

We specify an LSTM generator to translate a latent code vector, z, into a synthetic sentence s̃. This is illustrated in Figure 2 (bottom). The probability of a length-T sentence, s̃, given the encoded feature vector, z, is defined as

p(s̃ | z) = p(w_1 | z) ∏_{t=2}^{T} p(w_t | w_{<t}, z) ,   (6)

where w_t denotes the t-th generated token. Specifically, we generate the first word, w_1, deterministically from z, with w_1 = argmax(V h_1), where h_1 = tanh(C z). Bias terms are omitted for simplicity. All other words in the sentence are sequentially generated using the RNN, based on previously generated words, until the end-of-sentence symbol is generated. The t-th word is generated as w_t = argmax(V h_t), where h_t = H(y_{t-1}, h_{t-1}), and the hidden units h_t are recursively updated through the transition function H. V is a weight matrix used for computing a distribution over words. The input y_{t-1} for the t-th step is the embedding vector of the previously generated word w_{t-1}, i.e.,

y_{t-1} = W_e[w_{t-1}] .   (7)

The synthetic sentence s̃ is deterministically obtained, given z, by concatenating the generated words. In experiments, the transition function H is implemented with an LSTM (Hochreiter & Schmidhuber, 1997). Details are provided in the Supplementary Material.
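The decoding loop in (6)-(7) can be sketched as follows; for brevity a plain tanh RNN cell stands in for the LSTM transition function, and the weight names, dimensions, and random parameters are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def generate_sentence(z, params, max_len=20, eos_id=0):
    # Illustrative decoding loop for (6)-(7), with a tanh RNN cell standing in
    # for the LSTM; weight names (Wz, Wh, Uh, V, We) are stand-ins.
    Wz, Wh, Uh, V, We = (params[k] for k in ("Wz", "Wh", "Uh", "V", "We"))
    h = np.tanh(Wz @ z)                      # initial state from the latent code
    x = np.zeros(We.shape[0])                # embedding of a start-of-sentence token
    words = []
    for _ in range(max_len):
        h = np.tanh(Wh @ x + Uh @ h)         # recurrent state update
        w = int(np.argmax(V @ h))            # pick the most probable word
        words.append(w)
        if w == eos_id:                      # stop at the end-of-sentence symbol
            break
        x = We[:, w]                         # feed back its embedding, as in (7)
    return words

# Tiny usage example with random weights (vocabulary of 50 words, 10-d embeddings).
rng = np.random.default_rng(0)
k, dh, dz, Vsz = 10, 16, 8, 50
params = {"Wz": rng.normal(size=(dh, dz)), "Wh": rng.normal(size=(dh, k)),
          "Uh": rng.normal(size=(dh, dh)), "V": rng.normal(size=(Vsz, dh)),
          "We": rng.normal(size=(k, Vsz))}
print(generate_sentence(rng.normal(size=dz), params))
```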

2.5 Training Techniques

Soft-argmax approximation

To train the generator G, which contains discrete variables, direct application of gradient estimation may be difficult (Yu et al., 2017). Score-function-based approaches, such as the REINFORCE algorithm (Williams, 1992), achieve unbiased gradient estimation for discrete variables using Monte Carlo estimation. However, in our experiments, we found that the variance of the gradient estimation is very large, which is consistent with Maddison et al. (2017). Here we consider a soft-argmax operator (Zhang et al., 2016) when performing learning, as an approximation to (7):

y_{t-1} = W_e softmax(V h_{t-1} ⊙ L) ,   (8)

where ⊙ represents the element-wise product. Note that as L → ∞, this approximation approaches (7).
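A minimal NumPy sketch of this soft-argmax feedback is shown below; the temperature value L, the toy dimensions, and the random weights are illustrative, and the final line only checks numerically that the soft embedding approaches the hard argmax embedding as L grows.

```python
import numpy as np

def soft_argmax_embedding(logits, We, L=100.0):
    # Soft-argmax approximation (8): a sharply scaled softmax over the
    # vocabulary multiplied into the embedding matrix. As L -> infinity the
    # weights approach a one-hot vector, recovering the hard argmax of (7),
    # while remaining differentiable w.r.t. the logits.
    p = np.exp(L * (logits - logits.max()))
    p /= p.sum()
    return We @ p

rng = np.random.default_rng(0)
We = rng.normal(size=(10, 50))              # 10-d embeddings, vocabulary of 50
logits = rng.normal(size=50)                # V h_{t-1} for a single step
hard = We[:, np.argmax(logits)]             # hard argmax feedback of (7)
soft = soft_argmax_embedding(logits, We, L=1000.0)
print(np.abs(soft - hard).max())            # close to 0 for large L
```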

Pre-training

Previous literature (Goodfellow et al., 2014; Salimans et al., 2016) has discussed the fundamental difficulty of training GANs using gradient-based methods. In general, gradient descent optimization schemes may fail to converge to the equilibrium by moving along orbit trajectories among saddle points (Salimans et al., 2016). Intuitively, good initialization can facilitate convergence. Toward this end, we initialize the LSTM parameters of the generator by pre-training a standard CNN-LSTM autoencoder (Gan et al., 2016). For the discriminator/encoder initialization, we use a permutation training strategy: for each sentence in the corpus, we randomly swap two words to construct a slightly tweaked counterpart, and pre-train the discriminator to distinguish the tweaked sentences from the true sentences. The swapping operation is preferred here because it constitutes a much more challenging task for the discriminator than adding or deleting words, which disrupt the structure of real sentences more strongly and thus make the discrimination easier. The permutation pre-training is important because it requires the discriminator to learn features that characterize long-range dependencies within sentences. We empirically found that this provides a better initialization (compared to no pre-training) for the discriminator to learn good features.
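The swapping operation used for this permutation pre-training is simple to state in code; the sketch below is an illustrative Python version operating on whitespace-tokenized sentences, not the paper's preprocessing pipeline.

```python
import random

def swap_two_words(sentence):
    # Construct the slightly tweaked counterpart used for discriminator
    # pre-training: randomly swap the positions of two words.
    words = sentence.split()
    if len(words) < 2:
        return sentence
    i, j = random.sample(range(len(words)), 2)
    words[i], words[j] = words[j], words[i]
    return " ".join(words)

# The discriminator is pre-trained to separate real sentences (label 1)
# from their swapped counterparts (label 0).
random.seed(0)
print(swap_two_words("the random walk feature decomposition is unique"))
```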

We also utilized other training techniques to stabilize training, such as soft-labeling (Salimans et al., 2016). Details of these are provided in the Supplementary Material.

3 Related Work

Generative Moment Matching Networks (GMMNs) (Dziugaite et al., 2015; Li et al., 2015) are closely related to our approach. However, these methods either directly match the empirical distributions in the data domain, or extract features using a pre-trained autoencoder (Li et al., 2015). If the goal were to perform matching in the data domain when generating sentences, the dimensionality of the input data would be very large (higher than 10,000 in our case). Note that the minibatch size required to obtain reasonable statistical power grows linearly with the number of dimensions (Ramdas et al., 2014), and the computational cost of MMD grows quadratically with the number of data points. Therefore, directly applying GMMNs is often computationally prohibitive. Furthermore, directly matching in the data domain via GMMNs implies a word-by-word discrepancy, which yields less smooth gradients, because a word-by-word discrepancy ignores sentence structure. For example, the two sentences “a boy is swimming” and “boy is swimming” are far apart in a word-by-word metric, while they are indeed close in a sentence-level feature space.

A two-step method, where a feature encoder is learned first as in Li et al. (2015), helps alleviate the problems above. However, in Li et al. (2015) the feature encoder is fixed once pre-trained, limiting the potential to adjust the features during the training phase. Alternatively, our approach matches the real and synthetic data in a sentence feature space, where features are dynamically and adversarially adapted to focus on the most challenging features for the generator to mimic. In addition, the features are designed to maintain both discrimination and reconstruction ability, instead of merely focusing on reconstruction as in Li et al. (2015).

Recent work considered combining autoencoders or variational autoencoders (Kingma & Welling, 2014) with GAN (Zhao et al., 2017; Larsen et al., 2016; Makhzani et al., 2015; Mescheder et al., 2017; Wang & Liu, 2016). They demonstrated superior performance on image generation. Our approach is similar to these approaches; however, we attempt to learn the reconstruction of the latent code, instead of the input data (sentences). Donahue et al. (2017); Dumoulin et al. (2016) learned a reverse mapping from data space to latent space. In our approach we enforce the discriminator and encoder to share a latent structure, with the aim of learning a representation for both discrimination and latent code reconstruction. Chen et al. (2016) maximized the mutual information between the generated data and the latent codes by leveraging a network-adapted variational proposal distribution. In our case, we minimize the distance between the original and reconstructed latent codes.

Our approach attempts to minimize an NN-embedded MMD distance between two empirical distributions. Aside from MMD, kernel-based discrepancy metrics such as the kernelized Stein discrepancy (Liu et al., 2016; Wang & Liu, 2016) have been shown to be computationally tractable while maintaining statistical power. We leave the investigation of Stein discrepancies for moment matching as a promising future direction. Wasserstein GAN (Arjovsky et al., 2017) considers an Earth-Mover (EM) distance between the real and synthetic data distributions, instead of the JSD as in standard GAN (Goodfellow et al., 2014) or the TVD as in Zhao et al. (2017). The EM metric yields stable gradients, thus avoiding the mode-collapsing and vanishing-gradient problems of the latter two. We note that our approach is equivalent to minimizing an MMD loss over the data domain, however with an NN-embedded Gaussian kernel. As shown in Arjovsky et al. (2017), the MMD is a proper metric when the kernel is universal. Because of the similarity of the conditions, our approach enjoys an advantage of Wasserstein GAN, namely, ameliorating the vanishing-gradient problem.

4 Experiments

Data and Experimental Setup

Our model is trained using a combination of two datasets: (i) the BookCorpus dataset (Zhu et al., 2015), which consists of 70 million sentences from over 7,000 books; and (ii) the ArXiv dataset, which consists of 5 million sentences from abstracts of papers on various subjects, obtained from the arXiv website. The motivation for merging the two corpora is to investigate whether the model can generate sentences that integrate both scientific and informal writing styles. We randomly choose 0.5 million sentences from BookCorpus and 0.5 million sentences from arXiv to construct training and validation sets, i.e., 1 million sentences for each. For testing, we randomly select 25,000 sentences from each corpus, for a total of 50,000 sentences.

We train the generator and the discriminator/encoder iteratively. Given that the LSTM generator typically involves more parameters and is more difficult to train than the CNN discriminator, we perform one optimization step for the discriminator for every few steps of the generator. We use a mixture of 5 isotropic Gaussian (RBF) kernels with different bandwidths, as in Li et al. (2015). The bandwidth parameters are selected to be close to the median pairwise distance (in our case around 20) between feature vectors encoded from real sentences. λ_r and λ_m are selected based on performance on the validation set. Validation performance is evaluated by the generator loss and the corpus-level BLEU score (Papineni et al., 2002), described below.
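The kernel-mixture MMD and the median-distance heuristic can be sketched as follows; the particular bandwidth grid around the median and the random placeholder features are illustrative choices, since the text only states that five bandwidths close to the median distance are used.

```python
import numpy as np

def median_distance(F):
    # Median pairwise Euclidean distance between encoded feature vectors,
    # used as the reference scale for the kernel bandwidths.
    d2 = np.sum(F**2, 1)[:, None] + np.sum(F**2, 1)[None, :] - 2.0 * F @ F.T
    d = np.sqrt(np.maximum(d2, 0.0))
    return np.median(d[np.triu_indices_from(d, k=1)])

def mixture_mmd2(X, Y, sigmas):
    # Squared MMD under a mixture of Gaussian kernels, one per bandwidth,
    # as in Li et al. (2015): the kernel values are simply summed.
    def gram(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return sum(np.exp(-d2 / (2.0 * s**2)) for s in sigmas)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

# Illustrative bandwidth grid centered on the median distance of real features.
rng = np.random.default_rng(0)
f_real = rng.normal(size=(256, 900))
med = median_distance(f_real)
sigmas = [med * s for s in (0.5, 0.75, 1.0, 1.25, 1.5)]
print(round(med, 2), round(mixture_mmd2(f_real, rng.normal(size=(256, 900)), sigmas), 4))
```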

For the CNN discriminator/encoder, we use filter windows (h) of sizes {3, 4, 5} with 300 feature maps each, hence each sentence is represented as a 900-dimensional vector. The dimensionality of z and ẑ is also 900. The feature vector is then fed into a 900-200-2 fully connected network for the discriminator and a 900-900-900 network for the encoder, with sigmoid activation units connecting the intermediate layers and softmax/tanh units for the top layer of the discriminator/encoder, respectively. We did not observe performance changes by adding dropout. For the LSTM sentence generator, we use one hidden layer of 500 units.

Gradients are clipped if the norm of the gradient exceeds 5 (Sutskever et al., 2014). Adam (Kingma & Ba, 2015), with the same learning rate for both the discriminator and the generator, is used for optimization. The minibatch size is set to 256.
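For reference, a minimal NumPy sketch of norm-based gradient clipping with the stated threshold of 5 is shown below; reading the clipping as a rescaling of the full gradient is an assumption, following the scheme of Sutskever et al. (2014).

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    # Rescale all gradient arrays jointly if their global L2 norm exceeds
    # max_norm, so the update direction is preserved but its length is capped.
    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if total > max_norm:
        grads = [g * (max_norm / total) for g in grads]
    return grads

# Example: a gradient whose global norm exceeds 5 is rescaled to norm 5.
g = [np.full((10,), 5.0), np.full((10, 10), 4.0)]
clipped = clip_by_global_norm(g)
print(np.sqrt(sum(np.sum(c ** 2) for c in clipped)))   # -> 5.0
```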

Both the generator and the discriminator are pre-trained using the strategies described in Section 2. We also employ a warm-up training phase during the first two epochs, as we found that it improves convergence during the initial stage of learning. Specifically, during warm-up we use a mean-matching objective for the generator loss, i.e., L_G = ||E[f_s] - E[f_s̃]||^2, as in Salimans et al. (2016). Further details of the experimental design are provided in the Supplementary Material. All experiments are implemented in Theano (Bastien et al., 2012), using one NVIDIA GeForce GTX TITAN X GPU with 12GB memory. The model was trained for 50 epochs in roughly 3 days. Learning curves are shown in the Supplementary Material.

Figure 3: Moment matching comparison. Left: expectations of latent features from real vs. synthetic data. Right: elements of the feature covariance matrices Σ_s vs. Σ_s̃, for real and synthetic data, respectively.

Matching feature distributions

We first examine the generator's ability to produce synthetic features similar to those obtained from real data. For this purpose, we calculate the empirical expectation of the 900-dimensional sentence feature vector over 2,000 real sentences and 2,000 synthetic sentences. As shown in Figure 3 (left), the expectations of these 900 feature dimensions from synthetic sentences match well with the feature expectations from the real sentences. We also compare the estimated covariance matrix elements (both diagonal and off-diagonal) from real data against those estimated from synthetic data, in Figure 3 (right). We observe that the covariance structure of the 900-dimensional features from real and synthetic sentences in general matches well. The full covariance matrices for real and synthetic sentences are provided in the Supplementary Material. We also observe that the (mapped) synthetic features nicely cover the high-density regions of the real sentence features, while “completing” other areas of low density.

Quantitative comparison

We evaluate the quality of the generated sentences using the BLEU score (Papineni et al., 2002) and Kernel Density Estimation (KDE), as in Goodfellow et al. (2014); Nowozin et al. (2016). For comparison, we consider textGAN with 4 different loss objectives: Mean Matching (MM) as in Salimans et al. (2016), Covariance Matching (CM) as in (5), MMD, and MMD with a compressing network (MMD-L), which maps the original 900-dimensional features to 200 dimensions, as described in Section 2.3. We also compare to a baseline autoencoder (AE) model. The AE uses a CNN as encoder and an LSTM as decoder, where the CNN and LSTM network structures are identical to those used in textGAN. We finally consider a Variational Autoencoder (VAE) implemented as in Bowman et al. (2016). To train the VAE model, we use annealing to gradually increase the weight of the KL divergence between the prior and the approximate posterior. Details are provided in the Supplementary Material. We also compare with seqGAN (Yu et al., 2017). For seqGAN we follow the authors' guidelines of running 350 pre-training epochs followed by 50 discriminator training epochs, to generate 320 sentences. For AE, VAE and textGAN, we first uniformly sample 320 latent codes from the latent code space, and use the corresponding generator (or decoder, in the AE/VAE case) to generate sentences.

                 BLEU-4       BLEU-3       BLEU-2       KDE (nats)
AE               0.01±0.01    0.11±0.02    0.39±0.02    2727±42
VAE              0.02±0.02    0.16±0.03    0.54±0.03    1892±25
seqGAN           0.04±0.04    0.30±0.08    0.67±0.04    2019±53
textGAN(MM)      0.09±0.04    0.42±0.04    0.77±0.03    1823±50
textGAN(CM)      0.12±0.03    0.49±0.06    0.84±0.02    1686±41
textGAN(MMD)     0.13±0.05    0.49±0.06    0.83±0.04    1688±38
textGAN(MMD-L)   0.11±0.05    0.52±0.07    0.85±0.04    1684±44
Table 1: Quantitative results using BLEU-2,3,4 and KDE.

For BLEU score evaluation, we follow the strategy in Yu et al. (2017) of using the entire test set as the reference. For KDE evaluation, the lengths of the generated sentences differ, thus we first embed all sentences into a 900-dimensional vector space. Since no standard sentence encoder is available, we use the encoder learned by the AE. The covariance matrix for the Parzen kernel in KDE is set to the covariance of the feature vectors of the real test sentences. Despite the fact that the KDE approach, as a log-likelihood estimator, tends to have high variance (Theis et al., 2016), the KDE score tracks well with our BLEU score evaluation.
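The KDE evaluation can be sketched as a Gaussian Parzen-window log-likelihood over encoded feature vectors; the toy dimensions and random placeholder features below are illustrative, whereas in the paper the features are 900-dimensional AE encodings and the kernel covariance is estimated from real test sentences.

```python
import numpy as np

def parzen_log_likelihood(test_feats, gen_feats, cov):
    # Gaussian Parzen-window estimate: the density at each test feature is the
    # average of Gaussian kernels (with the given covariance) centered at the
    # generated features; returns the mean log-density in nats.
    d = cov.shape[0]
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    log_norm = -0.5 * (d * np.log(2.0 * np.pi) + logdet)
    lls = []
    for x in test_feats:
        diff = gen_feats - x                                # (n_gen, d)
        log_k = log_norm - 0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)
        lls.append(np.logaddexp.reduce(log_k) - np.log(len(gen_feats)))
    return float(np.mean(lls))

# Illustrative call on random 10-d "features".
rng = np.random.default_rng(0)
real, gen = rng.normal(size=(100, 10)), rng.normal(size=(320, 10))
print(parzen_log_likelihood(real, gen, np.cov(real, rowvar=False)))
```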

The results are shown in Table 1. MMD and MMD-L generally score higher in sentence quality. MMD-L seems better at capturing 2-grams (BLEU-2), while MMD outperforms MMD-L on 4-grams (BLEU-4). We also observed that, when using CM, the generated sentences tend to be shorter than with MMD (not shown).

Generated sentences

Table 2 shows six sentences generated by textGAN. The generated sentences seem able to produce novel phrases by imagining concept combinations, e.g., in Table 2(b,c,f), or to borrow words from a different corpus to compose novel sentences, e.g., in Table 2(d,e). In many cases, the model learns to automatically match parentheses and quotation marks, e.g., in Table 2(a), and can synthesize relatively long sentences, e.g., in Table 2(a,f). In general, the synthetic sentences seem syntactically reasonable. However, the semantic meaning is less well preserved, especially in sentences of more than 20 words, e.g., in Table 2(e,f).

(a) we show the joint likelihood estimator ( in a large number of estimating variables embedded on the subspace learning ) .
(b) this problem achieves less interesting choices of convergence guarantees on turing machine learning .
(c) in hidden markov relational spaces , the random walk feature decomposition is unique generalized parametric mappings .
(d) i see those primitives specifying a deterministic probabilistic machine learning algorithm .
(e) i wanted in alone in a gene expression dataset which do n’t form phantom action values .
(f) as opposite to a set of fuzzy modelling algorithm , pruning is performed using a template representing network structures .
Table 2: Sentences generated by textGAN.

We observe that the discriminator can still reliably distinguish the synthetic sentences from the real ones (the probability of predicting synthetic data as real is around 0.05), even when the synthetic sentences seem to preserve reasonable grammatical structure and use proper wording. It is likely that the CNN is able to accurately characterize semantic meaning and differentiate sentences, while the generator may get trapped in a local optimum, where any slight modification would result in a higher generator loss (3). Presumably, long-range features are not difficult for the discriminator/encoder to abstract, but are less likely to be imitated well by the generator. One promising direction is to leverage reinforcement learning strategies as in Yu et al. (2017), where the updating of the LSTM can be more effectively steered. Nevertheless, investigation of how to improve this long-range behavior is left as interesting future work.

Latent feature space trajectories

Following Bowman et al. (2016), we further empirically evaluate whether the latent variable space can “densely” encode sentences. We visualize the transition from one sentence to another by constructing a linear path between two randomly selected points in the latent feature space, and then generating the intermediate sentences along this linear trajectory. For comparison, a baseline autoencoder (AE) is trained for 20 epochs. The results for textGAN and AE are presented in Table 3. Compared to the AE, the sentences produced by textGAN are generally more syntactically and semantically reasonable. The transitions suggest “smoothness” and interpretability; however, the wording choices and sentence structure show dramatic changes in some regions of the latent feature space. This seems to indicate that local “transition smoothness” varies from region to region.
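The interpolation itself amounts to decoding points on a straight line between two latent codes; a minimal sketch is shown below, where the 900-dimensional codes and the number of steps are illustrative and the decoding step (via the trained generator) is only indicated in a comment.

```python
import numpy as np

def interpolate_latent(z_a, z_b, num_steps=10):
    # Points on the linear path between two latent codes; each point is then
    # decoded by the generator (or the AE decoder) into an intermediate sentence.
    return [(1.0 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, num_steps)]

rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=900), rng.normal(size=900)   # endpoints A and B
path = interpolate_latent(z_a, z_b)
print(len(path), path[0].shape)                         # 10 codes of dimension 900
```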

textGAN | AE
A our methods apply novel approaches to solve modeling tasks . |
- our methods apply novel approaches to solve modeling . | our methods apply to train UNK models involving complex .
- our methods apply two different approaches to solve computing . | our methods solve use to train ) .
- our methods achieves some different approaches to solve computing . | our approach show UNK to models exist .
- our methods achieves the best expert structure detection . | that supervised algorithms show to UNK speed .
- the methods have been different related tasks . | that address algorithms to handle ) .
- the guy is the minimum of UNK . | that address versions to be used in .
- the guy is n’t easy tonight . | i believe the means of this attempt to cope .
- i believe the guy is n’t smart okay? | i believe it ’s we be used to get .
- i believe the guy is n’t smart . | i believe it i ’m a way to belong .
B i believe i ’m going to get out . |
Table 3: Intermediate sentences produced from a linear transition between two points (A and B) in the latent feature space. Each sentence is generated from a latent point on the linear path; the left column is textGAN and the right column is the AE baseline.

5 Conclusion

We have introduced a novel approach for text generation using adversarial training, termed TextGAN, and have discussed several techniques to specify and train such a model. We demonstrated that the proposed model delivers superior performance compared to related approaches, can produce realistic sentences, and learns a latent representation space that “smoothly” encodes plausible sentences. Our quantitative evaluation against baseline models and existing methods supports these conclusions.

In future work, we will attempt to apply conditional GAN models (Mirza & Osindero, 2014) to disentangle the latent representations of different writing styles. This would enable a smooth lexical and grammatical transition between writing styles. It would also be interesting to generate text by conditioning on observed images (Pu et al., 2016). In addition, we plan to leverage an additional refining stage, where a reverse-order LSTM (Graves & Schmidhuber, 2005) is applied after the sentence is first generated, to produce sentences with better long-term semantic interpretation.

Acknowledgments

This research was supported by ARO, DARPA, DOE, NGA, ONR and NSF.

References