Code for the paper "Adversarially Regularized Autoencoders for Generating Discrete Structures" by Zhao, Kim, Zhang, Rush and LeCun
While autoencoders are a key technique in representation learning for continuous structures, such as images or waveforms, developing general-purpose autoencoders for discrete structures, such as text sequences or discretized images, has proven to be more challenging. In particular, discrete inputs make it more difficult to learn a smooth encoder that preserves the complex local relationships in the input space. In this work, we propose an adversarially regularized autoencoder (ARAE) with the goal of learning more robust discrete-space representations. ARAE jointly trains both a rich discrete-space encoder, such as an RNN, and a simpler continuous-space generator function, while using generative adversarial network (GAN) training to constrain the distributions to be similar. This method yields a smoother contracted code space that maps similar inputs to nearby codes, and also an implicit latent variable GAN model for generation. Experiments on text and discretized images demonstrate that the GAN model produces clean interpolations and captures the multimodality of the original space, and that the autoencoder produces improvements in semi-supervised learning as well as state-of-the-art results on an unaligned text style transfer task using only a shared continuous-space representation.
Recent work on regularized autoencoders, such as variational (Kingma & Welling, 2014; Rezende et al., 2014) and denoising (Vincent et al., 2008) variants, has shown significant progress in learning smooth representations of complex, high-dimensional continuous data such as images. These code-space representations facilitate the ability to apply smoother transformations in latent space in order to produce complex modifications of generated outputs, while still remaining on the data manifold.
Unfortunately, learning similar latent representations of discrete structures, such as text sequences or discretized images, remains a challenging problem. Initial work on VAEs for text has shown that optimization is difficult, as the decoder can easily degenerate into an unconditional language model (Bowman et al., 2015b). Recent work on generative adversarial networks (GANs) for text has mostly focused on getting around the use of discrete structures either through policy gradient methods (Che et al., 2017; Hjelm et al., 2017; Yu et al., 2017) or with the Gumbel-Softmax distribution (Kusner & Hernandez-Lobato, 2016). However, neither approach can yet produce robust representations directly.
A major difficulty of discrete autoencoders is mapping a discrete structure to a continuous code vector while also smoothly capturing the complex local relationships of the input space. Inspired by recent work combining pretrained autoencoders with deep latent variable models, we propose to target this issue with an adversarially regularized autoencoder (ARAE). Specifically, we jointly train a discrete structure encoder and continuous space generator, while constraining the two models with a discriminator to agree in distribution. This approach allows us to utilize a complex encoder model, such as an RNN, and still constrain it with a very flexible, but more limited, generator distribution. The full model can then be used as a smoother discrete structure autoencoder or as a latent variable GAN model where a sample can be decoded, with the same decoder, to a discrete output. Since the system produces a single continuous coded representation (in contrast to methods that act on each RNN state), it can easily be further regularized with problem-specific invariants, for instance to learn to ignore style, sentiment or other attributes for transfer tasks.
Experiments apply ARAE to discretized images and sentences, and demonstrate the key properties of the model. Using the latent variable model (ARAE-GAN), the model is able to generate varied samples that can be quantitatively shown to cover the input spaces, and to generate consistent image and sentence manipulations by moving around in the latent space via interpolation and offset vector arithmetic. Using the discrete encoder, the model can be used in a semi-supervised setting to give improvements on a sentence inference task. When the ARAE model is trained with task-specific adversarial regularization, the model improves the current best results on sentiment transfer reported in Shen et al. (2017) and produces compelling outputs on a topic transfer task using only a single shared code space. All outputs are listed in Appendix 9 and code is available at (removed for review).
In practice unregularized autoencoders often learn a degenerate identity mapping where the latent code space is free of any structure, so it is necessary to apply some method of regularization. A popular approach is to regularize through an explicit prior on the code space and use a variational approximation to the posterior, leading to a family of models called variational autoencoders (VAE) (Kingma & Welling, 2014; Rezende et al., 2014). Unfortunately VAEs for discrete text sequences can be challenging to train; for example, if the training procedure is not carefully tuned with techniques like word dropout and KL annealing (Bowman et al., 2015b), the decoder simply becomes a language model and ignores the latent code (although there have been some recent successes with convolutional models (Semeniuta et al., 2017; Yang et al., 2017)). One possible reason for the difficulty in training VAEs is the strictness of the prior (usually a spherical Gaussian) and/or the parameterization of the posterior. There has been some work on making the prior/posterior more flexible through explicit parameterization (Rezende & Mohamed, 2015; Kingma et al., 2016; Chen et al., 2017). A notable technique is the adversarial autoencoder (AAE) (Makhzani et al., 2015), which attempts to imbue the model with a more flexible prior implicitly through adversarial training. In the AAE framework, the discriminator is trained to distinguish between samples from a fixed prior distribution and the input encoding, thereby pushing the code distribution to match the prior. While this adds more flexibility, it has similar issues for modeling text sequences and suffers from mode-collapse in our experiments. Our approach has similar motivation, but notably we do not sample from a fixed prior distribution; our 'prior' is instead parameterized through a flexible generator. Nonetheless, this view (which has been observed by various researchers (Tran et al., 2017; Mescheder et al., 2017; Makhzani & Frey, 2017)) provides an interesting connection between VAEs and GANs.
The success of GANs on images has led many researchers to consider applying GANs to discrete data such as text. Policy gradient methods are a natural way to deal with the resulting non-differentiable generator objective when training directly in discrete space (Glynn, 1987; Williams, 1992). When trained on text data, however, such methods often require pre-training/co-training with a maximum likelihood (i.e. language modeling) objective (Che et al., 2017; Yu et al., 2017; Li et al., 2017). This precludes there being a latent encoding of the sentence, and is also a potential disadvantage of existing language models (which can otherwise generate locally-coherent samples). Another direction of work has been through reparameterizing the categorical distribution with the Gumbel-Softmax trick (Jang et al., 2017; Maddison et al., 2017); while initial experiments were encouraging on a synthetic task (Kusner & Hernandez-Lobato, 2016), scaling them to work on natural language is a challenging open problem. There has also been a flurry of recent, related approaches that work directly with the soft outputs from a generator (Gulrajani et al., 2017; Sai Rajeswar, 2017; Shen et al., 2017; Press et al., 2017). For example, Shen et al. (2017) exploit an adversarial loss for unaligned style transfer between texts by having the discriminator act on the RNN hidden states and using the soft outputs at each step as input to an RNN generator, utilizing the Professor-forcing framework (Lamb et al., 2016). Our approach instead works entirely in code space and does not require utilizing RNN hidden states directly.
Define $\mathcal{X} = \mathcal{V}^n$ to be a set of discrete structures, where $\mathcal{V}$ is a vocabulary of symbols, and $\mathbb{P}_\star$ to be a distribution over this space. For instance, for binarized images $\mathcal{V} = \{0, 1\}$ and $n$ is the number of pixels, while for sentences $\mathcal{V}$ is the vocabulary and $n$ is the sentence length. A discrete autoencoder consists of two parameterized functions: a deterministic encoder function $\text{enc}_\phi : \mathcal{X} \to \mathcal{C}$ with parameters $\phi$ that maps from input to code space, and a conditional decoder distribution $p_\psi(\mathbf{x} \mid \mathbf{c})$ over structures $\mathcal{X}$ with parameters $\psi$. The parameters are trained on a cross-entropy reconstruction loss:

$$\mathcal{L}_{\text{rec}}(\phi, \psi) = -\log p_\psi(\mathbf{x} \mid \text{enc}_\phi(\mathbf{x}))$$
The choice of the encoder and decoder parameterization is specific to the structure of interest; for example, we use RNNs for sequences. We use the notation $\hat{\mathbf{x}} = \arg\max_{\mathbf{x}} p_\psi(\mathbf{x} \mid \text{enc}_\phi(\mathbf{x}))$ for the (approximate) decoder mode. When $\hat{\mathbf{x}} = \mathbf{x}$, the autoencoder is said to perfectly reconstruct $\mathbf{x}$.
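To ground this setup, the following is a minimal PyTorch-style sketch of a text autoencoder of this form; the class and argument names (`TextAutoencoder`, `vocab_size`, and so on) are illustrative rather than taken from the released code, and the unit-sphere normalization anticipates the experimental setup described later.

```python
# Hedged sketch of the discrete (text) autoencoder: enc_phi and p_psi.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextAutoencoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # The decoder sees the code c concatenated to every input embedding.
        self.decoder = nn.LSTM(emb_dim + hidden_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def encode(self, x):                      # x: (batch, time) token ids
        _, (h, _) = self.encoder(self.embed(x))
        c = h[-1]                             # final hidden state as the code
        return c / c.norm(dim=-1, keepdim=True)   # normalize onto unit sphere

    def reconstruction_loss(self, x):         # -log p_psi(x | enc_phi(x))
        c = self.encode(x)
        emb = self.embed(x[:, :-1])           # teacher forcing on shifted input
        c_rep = c.unsqueeze(1).expand(-1, emb.size(1), -1)
        out, _ = self.decoder(torch.cat([emb, c_rep], dim=-1))
        logits = self.proj(out)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               x[:, 1:].reshape(-1))
```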
GANs are a class of parameterized implicit generative models (Goodfellow et al., 2014). The method approximates drawing samples from a true distribution $\mathbf{x} \sim \mathbb{P}_\star$ by instead employing a latent variable $\mathbf{z}$ and a parameterized deterministic generator function $\tilde{\mathbf{x}} = g_\theta(\mathbf{z})$ to produce samples $\tilde{\mathbf{x}} \sim \mathbb{P}_g$. Initial work on GANs minimizes the Jensen-Shannon divergence between the distributions. Recent work on Wasserstein GAN (WGAN) (Arjovsky et al., 2017) replaces this with the Earth-Mover (Wasserstein-1) distance.
GAN training utilizes two separate models: a generator $g_\theta(\mathbf{z})$ maps a latent vector from some easy-to-sample source distribution to a sample, and a critic/discriminator $f_w(\mathbf{x})$ aims to distinguish real data from generated samples from $g_\theta$. Informally, the generator is trained to fool the critic, and the critic to tell real from generated. WGAN training uses the following min-max optimization over generator parameters $\theta$ and critic parameters $w$,

$$\min_\theta \max_{w \in \mathcal{W}} \ \mathbb{E}_{\mathbf{x} \sim \mathbb{P}_\star}[f_w(\mathbf{x})] - \mathbb{E}_{\tilde{\mathbf{x}} \sim \mathbb{P}_g}[f_w(\tilde{\mathbf{x}})],$$

where $f_w$ denotes the critic function, $\tilde{\mathbf{x}}$ is obtained from the generator, $\tilde{\mathbf{x}} = g_\theta(\mathbf{z})$, and $\mathbb{P}_\star$ and $\mathbb{P}_g$ are the real and generated distributions. If the critic parameters $w$ are restricted to a 1-Lipschitz function set $\mathcal{W}$, this term corresponds to minimizing the Wasserstein-1 distance $W(\mathbb{P}_\star, \mathbb{P}_g)$. We use a naive approximation to enforce this property by weight-clipping, i.e. $w \in [-\epsilon, \epsilon]^d$ (Arjovsky et al., 2017).
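For concreteness, one critic update under this scheme can be sketched as follows; this is a hedged PyTorch-style sketch in which `critic`, `gen`, and the clipping constant `eps` are illustrative names standing in for $f_w$, $g_\theta$, and $\epsilon$.

```python
# One WGAN critic update with naive weight clipping (Arjovsky et al., 2017).
import torch

def critic_step(critic, gen, real, opt_critic, eps=0.01, z_dim=100):
    opt_critic.zero_grad()
    z = torch.randn(real.size(0), z_dim)
    fake = gen(z).detach()                    # no gradient into the generator
    # Maximize E[f_w(real)] - E[f_w(fake)]  <=>  minimize the negation.
    loss = -(critic(real).mean() - critic(fake).mean())
    loss.backward()
    opt_critic.step()
    for p in critic.parameters():             # crude 1-Lipschitz enforcement
        p.data.clamp_(-eps, eps)
```

For ARAE below, `real` will be codes produced by the encoder rather than raw data.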
Ideally, a discrete autoencoder should be able to reconstruct $\mathbf{x}$ from $\mathbf{c}$, but also smoothly assign similar codes $\mathbf{c}$ and $\mathbf{c}'$ to similar $\mathbf{x}$ and $\mathbf{x}'$. For continuous autoencoders, this property can be enforced directly through explicit regularization. For instance, contractive autoencoders (Rifai et al., 2011) regularize their loss by the functional smoothness of $\text{enc}_\phi$. However, this criterion does not apply when inputs are discrete and we lack even a metric on the input space. How can we enforce that similar discrete structures map to nearby codes?
Adversarially regularized autoencoders target this issue by learning a parallel continuous-space generator with a restricted functional form to act as a smoother reference encoding. The joint objective regularizes the autoencoder to constrain the discrete encoder to agree in distribution with its continuous counterpart:

$$\min_{\phi, \psi} \ \mathcal{L}_{\text{rec}}(\phi, \psi) + \lambda^{(1)} W(\mathbb{P}_Q, \mathbb{P}_G)$$

Above, $W$ is the Wasserstein-1 distance between $\mathbb{P}_Q$, the distribution of codes from the discrete encoder model ($\mathbf{c} = \text{enc}_\phi(\mathbf{x})$ where $\mathbf{x} \sim \mathbb{P}_\star$), and $\mathbb{P}_G$, the distribution of codes from the continuous generator model ($\tilde{\mathbf{c}} = g_\theta(\mathbf{z})$ for some $\mathbf{z} \sim p(\mathbf{z})$, e.g. $\mathbf{z} \sim \mathcal{N}(0, I)$). To approximate the Wasserstein-1 term, the objective includes an embedded critic function $f_w$ which is optimized adversarially to the encoder and generator as described in the background. The full model is shown in Figure 1.
To train the model, we use block coordinate descent to alternate between optimizing different parts of the model: (1) the encoder and decoder to minimize reconstruction loss, (2) the WGAN critic function to approximate the $W$ term, and (3) the encoder and generator to adversarially fool the critic to minimize $W$:

$$\begin{aligned} 1: \ & \min_{\phi, \psi} \ \mathcal{L}_{\text{rec}}(\phi, \psi) \\ 2: \ & \max_{w \in \mathcal{W}} \ \mathbb{E}_{\mathbf{x} \sim \mathbb{P}_\star}[f_w(\text{enc}_\phi(\mathbf{x}))] - \mathbb{E}_{\tilde{\mathbf{c}} \sim \mathbb{P}_G}[f_w(\tilde{\mathbf{c}})] \\ 3: \ & \min_{\phi, \theta} \ \mathbb{E}_{\mathbf{x} \sim \mathbb{P}_\star}[f_w(\text{enc}_\phi(\mathbf{x}))] - \mathbb{E}_{\tilde{\mathbf{c}} \sim \mathbb{P}_G}[f_w(\tilde{\mathbf{c}})] \end{aligned}$$

The full training algorithm is shown in Algorithm 1.
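A hedged sketch of one pass of this block coordinate descent, reusing the illustrative `TextAutoencoder` and `critic_step` from the earlier sketches (the optimizer objects are assumptions):

```python
# One ARAE training iteration (cf. Algorithm 1), PyTorch-style sketch.
import torch

def arae_step(ae, gen, critic, opts, batch, n_critic=5, z_dim=100, eps=0.01):
    opt_ae, opt_gen, opt_enc, opt_critic = opts

    # (1) Encoder/decoder step: minimize the reconstruction loss.
    opt_ae.zero_grad()
    ae.reconstruction_loss(batch).backward()
    opt_ae.step()

    # (2) Critic steps: approximate W(P_Q, P_G).
    for _ in range(n_critic):
        codes = ae.encode(batch).detach()     # block gradients to the encoder
        critic_step(critic, gen, codes, opt_critic, eps, z_dim)

    # (3) Adversarial step: encoder and generator jointly fool the critic,
    #     i.e. minimize E[f_w(enc(x))] - E[f_w(g(z))].
    opt_enc.zero_grad()
    opt_gen.zero_grad()
    z = torch.randn(batch.size(0), z_dim)
    adv = critic(ae.encode(batch)).mean() - critic(gen(z)).mean()
    adv.backward()
    opt_enc.step()
    opt_gen.step()
```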
One benefit of the ARAE framework is that it compresses the input to a single code vector. This makes it ideal for manipulating discrete objects while in continuous code space. For example, consider the problem of unaligned transfer, where we want to change an attribute of a discrete input without supervised examples, e.g. to change the topic or sentiment of a sentence. First, we extend the decoder to condition on a transfer variable $y$ denoting this attribute, which is known during training, to learn $p_\psi(\mathbf{x} \mid \mathbf{c}, y)$. Next, we train the code space to be invariant to this attribute, to force it to be learned fully by the decoder. Specifically, we further regularize the code space to map similar $\mathbf{x}$ with different attribute labels $y$ near enough to fool a code space attribute classifier, i.e.:

$$\min_{\phi, \psi} \ \mathcal{L}_{\text{rec}}(\phi, \psi) - \lambda^{(2)} \mathcal{L}_{\text{class}}(\phi, u)$$

where $\mathcal{L}_{\text{class}}(\phi, u)$ is the loss of a classifier $p_u(y \mid \mathbf{c})$ from code space to labels (in our experiments we always set $\lambda^{(2)} = 1$). To incorporate this additional regularization, we simply add two more gradient update steps: (2b) training a classifier to discriminate codes, and (3b) adversarially training the encoder to fool this classifier. The algorithm is shown in Algorithm 2. Note that a similar technique has been introduced in other domains, notably in images (Lample et al., 2017) and video modeling (Denton & Birodkar, 2017).
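A hedged sketch of these two extra steps, with `clf` an assumed MLP from code space to attribute labels:

```python
# Steps (2b) and (3b): attribute classifier and adversarial encoder update.
import torch
import torch.nn.functional as F

def transfer_steps(ae, clf, batch, labels, opt_clf, opt_enc, lam2=1.0):
    # (2b) Train the classifier to predict the attribute y from the code c.
    opt_clf.zero_grad()
    F.cross_entropy(clf(ae.encode(batch).detach()), labels).backward()
    opt_clf.step()

    # (3b) Train the encoder to fool the classifier: ascend the classifier
    # loss (via negation) so the code carries no attribute information.
    opt_enc.zero_grad()
    (-lam2 * F.cross_entropy(clf(ae.encode(batch)), labels)).backward()
    opt_enc.step()
```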
We experiment with three different ARAE models: (1) an autoencoder for discretized images trained on the binarized version of MNIST, (2) an autoencoder for text sequences trained using the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015a), and (3) an autoencoder trained for text transfer (Section 6.2) based on the Yelp and Yahoo datasets for unaligned sentiment and topic transfer. All three models utilize the same generator architecture, $g_\theta$. The generator uses a low-dimensional $\mathbf{z}$ with a Gaussian prior $p(\mathbf{z}) = \mathcal{N}(0, I)$ and maps it to $\tilde{\mathbf{c}}$. Both the critic $f_w$ and the generator $g_\theta$ are parameterized as feed-forward MLPs.
The image model uses a fully-connected NN to autoencode binarized images. Here $\mathcal{X} = \{0, 1\}^n$ where $n$ is the image size. The encoder used is a feed-forward MLP network mapping from $\{0, 1\}^n \to \mathbb{R}^m$, $\text{enc}_\phi(\mathbf{x}) = \mathbf{c}$. The decoder predicts each pixel in $\mathbf{x}$ as a parameterized logistic regression, $p_\psi(\mathbf{x} \mid \mathbf{c}) = \prod_{j=1}^{n} \sigma(h_j)^{x_j} (1 - \sigma(h_j))^{1 - x_j}$, where $\mathbf{h} = \text{MLP}(\mathbf{c}; \psi)$.
The text model uses a recurrent neural network (RNN) for both the encoder and decoder. Here $\mathcal{X} = \mathcal{V}^n$ where $n$ is the sentence length and $\mathcal{V}$ is the vocabulary of the underlying language. Define an RNN as a parameterized recurrent function $\mathbf{h}_j = \text{RNN}(x_j, \mathbf{h}_{j-1}; \phi)$ for $j = 1, \dots, n$ (with $\mathbf{h}_0 = 0$) that maps a discrete input structure $\mathbf{x}$ to hidden vectors $\mathbf{h}_1, \dots, \mathbf{h}_n$. For the encoder, we define $\text{enc}_\phi(\mathbf{x}) = \mathbf{h}_n = \mathbf{c}$. For decoding we feed $\mathbf{c}$ as an additional input to the decoder RNN at each time step, i.e. $\tilde{\mathbf{h}}_j = \text{RNN}(x_j, \tilde{\mathbf{h}}_{j-1}, \mathbf{c}; \psi)$, and further calculate the distribution over $\mathcal{V}$ at each time step via softmax, $p_\psi(\mathbf{x} \mid \mathbf{c}) = \prod_{j=1}^{n} \text{softmax}(\mathbf{W} \tilde{\mathbf{h}}_j + \mathbf{b})_{x_j}$, where $\mathbf{W}$ and $\mathbf{b}$ are parameters (part of $\psi$). Finding the most likely sequence $\hat{\mathbf{x}}$ under this distribution is intractable, but it is possible to approximate it using greedy search or beam search. In our experiments we use an LSTM architecture (Hochreiter & Schmidhuber, 1997) for both the encoder/decoder and decode using greedy search. The text transfer model uses the same architecture as the text model but extends it with a code space classifier $p_u(y \mid \mathbf{c})$ which is modeled using an MLP and trained to minimize cross-entropy.
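Greedy decoding under this parameterization might look as follows; this is a sketch against the illustrative `TextAutoencoder` above, and the start-of-sentence token id is an assumption.

```python
# Greedy search: pick the argmax word at each step, feeding the code c
# as an extra input at every time step.
import torch

@torch.no_grad()
def greedy_decode(ae, c, max_len=30, sos=1):
    tok = torch.full((c.size(0), 1), sos, dtype=torch.long)
    state, out = None, []
    for _ in range(max_len):
        inp = torch.cat([ae.embed(tok), c.unsqueeze(1)], dim=-1)
        h, state = ae.decoder(inp, state)
        tok = ae.proj(h[:, -1]).argmax(-1, keepdim=True)   # most likely word
        out.append(tok)
    return torch.cat(out, dim=1)   # truncate at the first end-of-sentence id
```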
Our baselines utilize a standard autoencoder (AE) and the cross-aligned autoencoder (Shen et al., 2017) for transfer. Note that in both our ARAE and standard AE experiments, the code from the encoder is normalized to lie on the unit sphere, and the generated code is bounded to lie in $(-1, 1)^m$ by the $\tanh$ function at the output layer. We additionally experimented with the sequence VAE introduced by Bowman et al. (2015b) and the adversarial autoencoder (AAE) model (Makhzani et al., 2015) on the SNLI dataset. However, despite extensive parameter tuning we found that neither model was able to learn meaningful latent representations: the VAE simply ignored the latent code and the AAE experienced mode-collapse and repeatedly generated the same samples. Appendix 12 includes detailed descriptions of the hyperparameters, model architecture, and training regimes.
Our experiments consider three aspects of the model. First we measure the empirical impact of regularization on the autoencoder. Next we apply the discrete autoencoder to two applications, unaligned style transfer and semi-supervised learning. Finally we employ the learned generator network as an implicit latent variable model (ARAE-GAN) over discrete sequences.
Our main goal for ARAE is to regularize the model to produce a smoother encoder, by requiring the distribution from the encoder to match the distribution from the continuous generator over a simple latent variable. To examine this claim we consider two basic statistical properties of the code space during training of the text model on SNLI, shown in Figure 2. On the left, we see that the norms of the encoder code $\mathbf{c}$ and the generator code $\tilde{\mathbf{c}}$ converge quickly in ARAE training. The encoder code is always restricted to be on the unit sphere, and the generated code quickly learns to match it. The middle plot shows the convergence of the trace of the covariance matrix between the generator and the encoder as training progresses. We find that the variance of the encoder and the generator match after several epochs. To check the smoothness of the model, for both ARAE and AE, we take a sentence and calculate the average cosine similarity of 100 randomly-selected sentences that had an edit-distance of at most 5 to the original sentence. We do this for 250 sentences and calculate the mean of the average cosine similarity. Figure 2 (right) shows that the cosine similarity of nearby sentences is much higher for the ARAE than for the AE. Edit-distance is not an ideal proxy for similarity in sentences, but it is often a sufficient condition.
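The smoothness probe can be expressed compactly as follows; `sample_neighbors`, which produces sentences within a fixed edit distance of the input, is an assumed helper.

```python
# Mean (over sentences) of the average cosine similarity between a sentence's
# code and the codes of nearby sentences (edit distance <= 5).
import torch.nn.functional as F

def smoothness(ae, sentences, sample_neighbors, n_neighbors=100):
    scores = []
    for x in sentences:                       # e.g. 250 held-out sentences
        c = ae.encode(x.unsqueeze(0))
        sims = [F.cosine_similarity(c, ae.encode(n.unsqueeze(0))).item()
                for n in sample_neighbors(x, n_neighbors)]
        scores.append(sum(sims) / len(sims))
    return sum(scores) / len(scores)
```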
Finally, an ideal representation should be robust to small changes of the input around the training examples in code space (Rifai et al., 2011). We can test this property by feeding a noised input to the encoder and (i) calculating the score given to the original input, and (ii) checking the reconstructions. Table 1 (right) shows an experiment for text where we add noise by permuting $k$ words in each sentence. We observe that the ARAE is able to map a noised sentence to a natural sentence (though not necessarily the denoised sentence). Table 1 (left) shows empirical results for these experiments. We obtain the reconstruction error (i.e. negative log-likelihood) of the original (non-noised) sentence under the decoder, utilizing the noised code. We find that when $k = 0$ (i.e. no swaps), the regular AE better reconstructs the input, as expected. However, as we increase the number of swaps and push the input further away from the data manifold, the ARAE is more likely to produce the original sentence. We note that unlike denoising autoencoders, which require a domain-specific noising function (Hill et al., 2016; Vincent et al., 2008), the ARAE is not explicitly trained to denoise an input, but learns to do so as a byproduct of adversarial regularization.
A smooth autoencoder combined with low reconstruction error should make it possible to more robustly manipulate discrete objects through code space without dropping off the data manifold. To test this hypothesis, we experimented with two unaligned text transfer tasks. For these tasks, we attempt to change one attribute of a sentence without aligned examples of this change. To perform this transfer, we learn a code space that can represent an input that is agnostic to this attribute, and a decoder that can incorporate the attribute (as described in Section 4). We experiment with unaligned transfer of sentiment on the Yelp corpus and topic on the Yahoo corpus (Zhang et al., 2015).
For sentiment we follow the same setup as Shen et al. (2017) and split the Yelp corpus into two sets of unaligned positive and negative reviews. We train an ARAE as an autoencoder with two separate decoders, one for positive and one for negative sentiment, and incorporate adversarial training of the encoder to remove sentiment information from the code space. We test by encoding sentences of one class and decoding, greedily, with the opposite decoder.
Our evaluation is based on four automatic metrics, shown in Table 2: (i) Transfer: measuring how successful the model is at transferring sentiment based on an automatic classifier (we use the fastText library (Joulin et al., 2016)); (ii) BLEU: measuring the consistency between the transferred text and the original, where we expect the model to maintain as much information as possible and transfer only the style; (iii) Perplexity: measuring the fluency of the generated text; (iv) Reverse Perplexity: measuring the extent to which the generations are representative of the underlying data distribution. (This reverse perplexity is calculated by training a language model on the generated data and measuring perplexity on held-out, real data, i.e. the reverse of regular perplexity. We also found this metric to be helpful for early-stopping based on validation data.) Both perplexity numbers are obtained by training an RNN language model.
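Reverse perplexity simply runs the usual evaluation in the opposite direction; schematically (with `train_lm` and `perplexity` as assumed helpers):

```python
# Reverse PPL: fit a language model on model samples, test on real text.
def reverse_ppl(generated_corpus, real_heldout, train_lm, perplexity):
    lm = train_lm(generated_corpus)       # RNN LM fit to generated samples
    return perplexity(lm, real_heldout)   # low = samples cover the real data
```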
We additionally perform human evaluations on the cross-aligned AE and our best ARAE model. We randomly select 1000 sentences (500/500 positive/negative), obtain the corresponding transfers from both models, and ask Amazon Mechanical Turkers to evaluate the sentiment (Positive/Neutral/Negative) and naturalness (1-5, 5 being most natural) of the transferred sentences. We create a separate task in which we show the Turkers the original and the transferred sentences, and ask them to evaluate the similarity based on sentence structure (1-5, 5 being most similar). We explicitly ask the Turkers to disregard sentiment in their similarity assessment.
In addition to comparing against the cross-aligned AE of Shen et al. (2017), we also compare against a vanilla AE trained without adversarial regularization. For ARAE, we experimented with different weightings $\lambda^{(1)}$ on the adversarial loss (see Section 4). Experimentally, the adversarial regularization enhances transfer and perplexity, but tends to make the transferred text less similar to the original compared to the AE. Some randomly selected sentences are shown in Figure 6 and more samples are available in Appendix 9.
Table 2: Experiments on sentiment transfer. The left side shows the automatic metrics (Transfer/BLEU/PPL/Reverse PPL) while the right side shows the human evaluation metrics (Transfer/Similarity/Naturalness). Cross-Aligned AE is from Shen et al. (2017).
The same method can be applied to other style transfer tasks, for instance the more challenging Yahoo QA data (Zhang et al., 2015). For Yahoo we chose 3 relatively distinct topic classes for transfer: Science & Math, Entertainment & Music, and Politics & Government. As the dataset contains both questions and answers, we separated our experiments into titles (questions) and replies (answers). The qualitative results are shown in Table 4. See Appendix 9 for additional generation examples.
We further utilize ARAE in a standard AE setup for semi-supervised training. We experiment on a natural language inference task, shown in Table 5 (right). We use 22.2%, 10.8% and 5.25% of the original labeled training data, and use the rest of the training set for unlabeled training. The labeled set is randomly picked. The full SNLI training set contains 543k sentence pairs, and we use supervised sets of 120k, 59k and 28k sentence pairs respectively for the three settings. As a baseline we use an AE trained on the additional data, similar to the setting explored in Dai & Le (2015). For ARAE we use the subset of unsupervised data below a maximum sentence length, which roughly includes 655k single sentences (due to the length restriction, this is a subset of the 715k sentences that were used for AE training). As observed by Dai & Le (2015), training on unlabeled data with an AE objective improves upon a model just trained on labeled data. Training with adversarial regularization provides further gains.
Table 5 (left): Reverse PPL of a language model trained on data from each source (columns: Data for LM, Reverse PPL).
After training, an ARAE can also be used as an implicit latent variable model controlled by $\mathbf{z}$ and the generator $g_\theta$, which we refer to as ARAE-GAN. While models of this form have been widely used for generation in other modalities, they have been less effective for discrete structures. In this section, we attempt to measure the effectiveness of this induced discrete GAN.
A common test for a GAN's ability to mimic the true distribution $\mathbb{P}_\star$ is to train a simple model on generated samples from $g_\theta$. While there are pitfalls to this evaluation (Theis et al., 2016), it provides a starting point for text modeling. Here we generate 100k samples from (i) ARAE-GAN, (ii) an AE (to "sample" from an AE we fit a multivariate Gaussian to the code space after training and generate code vectors from this Gaussian to decode back into sentence space), (iii) an RNN LM trained on the same data, and (iv) the real training set (samples from the models are shown in Appendix 10). All models are of the same size to allow for fair comparison. We train an RNN language model on the generated samples and evaluate on held-out data to calculate the reverse perplexity. As can be seen from Table 5, training on real data (understandably) outperforms training on generated data by a large margin. Surprisingly however, we find that a language model trained on ARAE-GAN data performs slightly better than one trained on LM-generated/AE-generated data. We further found that the reverse PPL of an AAE (Makhzani et al., 2015) was quite high due to mode-collapse.
Another property of GANs (and VAEs) is that the Gaussian form of $p(\mathbf{z})$ induces the ability to smoothly interpolate between outputs by exploiting the structure of the latent space. While language models may provide a better estimate of the underlying probability space, constructing this style of interpolation would require combinatorial search, which makes this a useful feature of text GANs. We experiment with this property by sampling two points $\mathbf{z}_0$ and $\mathbf{z}_1$ from $p(\mathbf{z})$ and constructing intermediary points $\mathbf{z}_\lambda = \lambda \mathbf{z}_1 + (1 - \lambda) \mathbf{z}_0$. For each we generate the argmax output $\tilde{\mathbf{x}}_\lambda$. The samples are shown in Figure 3 (left) for text and in Figure 3 (right) for a discretized MNIST ARAE-GAN.
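A sketch of the interpolation procedure, using the illustrative pieces from the earlier sketches:

```python
# Linear interpolation in z-space, decoded greedily at each step.
import torch

def interpolate(gen, ae, steps=10, z_dim=100):
    z0, z1 = torch.randn(z_dim), torch.randn(z_dim)
    lams = torch.linspace(0, 1, steps)
    zs = torch.stack([(1 - l) * z0 + l * z1 for l in lams])
    codes = gen(zs)                    # map each intermediate z to a code
    return greedy_decode(ae, codes)    # argmax decode each code to tokens
```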
A final intriguing property of image GANs is the ability to move in the latent space via offset vectors (similar to the case with word vectors (Mikolov et al., 2013)). For example, Radford et al. (2016) observe that when the mean latent vector for "men with glasses" is subtracted from the mean latent vector for "men without glasses" and applied to an image of a "woman without glasses", the resulting image is that of a "woman with glasses". To experiment with this property we generate 1 million sentences from the ARAE-GAN and compute vector transforms in this space to attempt to change main verbs, subjects and modifiers (details in Appendix 11). Some examples of successful transformations are shown in Figure 4 (right). Quantitative evaluation of the success of the vector transformations is given in Figure 4 (left).
A man in a tie is sleeping and clapping on balloons . → A man in a tie is clapping and walking dogs .
A person is standing in the air beneath a criminal . → A person is walking in the air beneath a pickup .
The jewish boy is trying to stay out of his skateboard . → The jewish man is trying to stay out of his horse .
The people works in a new uniform studio . → A man works in a new studio uniform .
Some child head a playing plastic with drink . → Two children playing a head with plastic drink .
A baby workers is watching steak with the water . → Two workers watching baby steak with the grass .
The people shine or looks into an area . → The dog arrives or looks into an area .
The boy ’s babies is wearing a huge factory . → The dog ’s babies is wearing a huge ears .
A women are walking outside near a man . → Three women are standing near a man walking .
The dogs are sleeping in front of the dinner . → Two dogs are standing in front of the dinner .
A side child listening to a piece with steps playing on a table . → Several child playing a guitar on side with a table .
Two children are working in red shirt at the cold field . → Several children working in red shirt are cold at the field .
We present adversarially regularized autoencoders as a simple approach for training a discrete structure autoencoder jointly with a code-space generative adversarial network. The model learns an improved autoencoder, as demonstrated by semi-supervised experiments and improvements on text transfer experiments. It also learns a useful generative model for text that exhibits a robust latent space, as demonstrated by natural interpolations and vector arithmetic. We do note that (as has been frequently observed when training GANs) our model seemed to be quite sensitive to hyperparameters. Finally, while many useful models for text generation already exist, text GANs provide a qualitatively different approach influenced by the underlying latent variable structure. We envision that such a framework could be extended to a conditional setting, combined with other existing decoding schemes, or used to provide a more interpretable model of language.
Hill, F., Cho, K., and Korhonen, A. Learning distributed representations of sentences from unlabelled data. In Proceedings of NAACL, 2016.
Maddison, C. J., Mnih, A., and Teh, Y. W. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In Proceedings of ICLR, 2017.
Rifai, S., Vincent, P., Muller, X., Glorot, X., and Bengio, Y. Contractive Auto-Encoders: Explicit Invariance During Feature Extraction. In Proceedings of ICML, 2011.
Williams, R. J. Simple Statistical Gradient-following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8, 1992.
One can interpret the ARAE framework as a dual pathway network mapping two distinct distributions into a similar one: the encoder $\text{enc}_\phi$ and the generator $g_\theta$ both output code vectors that are kept similar in terms of Wasserstein distance as measured by the critic. We provide the following proposition showing that under our parameterization of the encoder and the generator, as the Wasserstein distance converges, the encoder distribution ($\mathbb{P}_Q$) converges to the generator distribution ($\mathbb{P}_G$), and further, their moments converge.
This is ideal since under our setting the generated distribution is simpler than the encoded distribution, because the input to the generator is from a simple distribution (e.g. spherical Gaussian) and the generator possesses less capacity than the encoder. However, it is not so simple that it is overly restrictive (e.g. as in VAEs). Empirically we observe that the first and second moments do indeed converge as training progresses (Section 6.1).
Proposition 1. Let $\mathbb{P}$ be a distribution on a compact set $\chi$, and $(\mathbb{P}_n)_{n \in \mathbb{N}}$ be a sequence of distributions on $\chi$. Further suppose that $W(\mathbb{P}_n, \mathbb{P}) \to 0$. Then the following statements hold:

(i) $\mathbb{P}_n \rightharpoonup \mathbb{P}$ (i.e. convergence in distribution).

(ii) All moments converge, i.e. for all $k > 1$, $k \in \mathbb{N}$,

$$\mathbb{E}_{X \sim \mathbb{P}_n}\Big[\prod_{i=1}^{d} X_i^{p_i}\Big] \to \mathbb{E}_{X \sim \mathbb{P}}\Big[\prod_{i=1}^{d} X_i^{p_i}\Big]$$

for all $p_1, \dots, p_d \geq 0$ such that $\sum_{i=1}^{d} p_i = k$.
Proof. (i) has been proved in Villani (2008), Theorem 6.9.

For (ii), by the Portmanteau Theorem, (i) is equivalent to

$$\mathbb{E}_{X \sim \mathbb{P}_n}[f(X)] \to \mathbb{E}_{X \sim \mathbb{P}}[f(X)]$$

for all bounded and continuous functions $f : \mathbb{R}^d \to \mathbb{R}$, where $d$ is the dimension of the random variable.

The $k$-th moment of a distribution is given by $\mathbb{E}\big[\prod_{i=1}^{d} X_i^{p_i}\big]$ with $\sum_{i=1}^{d} p_i = k$. Our encoded code is bounded, as we normalize the encoder output to lie on the unit sphere, and our generated code is also bounded to lie in $(-1, 1)^m$ by the $\tanh$ function. Hence $f(X) = \prod_{i=1}^{d} X_i^{p_i}$ is a bounded continuous function for all $p_i \geq 0$. Therefore, all moments converge.
We generate 1 million sentences from the ARAE-GAN and parse the sentences to obtain the main verb, subject, and modifier. Then for a given sentence, to change the main verb we subtract the mean latent vector for all other sentences with the same main verb (in the first example in Figure 4 this would correspond to all sentences that had "sleeping" as the main verb) and add the mean latent vector for all sentences that have the desired transformation (with the running example this would be all sentences whose main verb was "walking"). We do the same to transform the subject and the modifier. We decode back into sentence space with the transformed latent vector via sampling from $p_\psi(\mathbf{x} \mid g_\theta(\tilde{\mathbf{z}}))$. Some examples of successful transformations are shown in Figure 4 (right). Quantitative evaluation of the success of the vector transformations is given in Figure 4 (left). For each original vector we sample 100 sentences from $p_\psi$ over the transformed new latent vector and consider it a match if any of the sentences demonstrate the desired transformation. Match % is the proportion of original vectors that yield a match post transformation. As we ideally want the generated samples to only differ in the specified transformation, we also calculate the average word precision against the original sentence (Prec) for any match.
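The transform itself reduces to mean-offset arithmetic in latent space; a sketch, assuming the latent vectors have been grouped by their parsed attribute beforehand:

```python
# Offset-vector transform: remove the source attribute direction and add
# the target one (e.g. main verb "sleeping" -> "walking").
import torch

def offset_transform(z, zs_source, zs_target):
    # zs_source/zs_target: (N, z_dim) latents sharing the source/target value.
    return z - zs_source.mean(0) + zs_target.mean(0)
```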
The encoder is a three-layer MLP, 784-800-400-100.
Additive Gaussian noise is added into $\mathbf{c}$, which is then fed into the decoder. The standard deviation of that noise is exponentially decayed over the course of training.
The decoder is a four-layer MLP, 100-400-800-1000-784.
The autoencoder is optimized by Adam, with learning rate 5e-04.
An MLP critic 100-100-60-20-1 with weight clipping. The critic is trained for 10 iterations within each GAN loop.
Both components of the GAN are optimized by Adam, with learning rate 5e-04 on the generator and 5e-05 on the critic.
Weighting factor $\lambda^{(1)}$ on the adversarial loss.
The encoder is a one-layer LSTM with 300 hidden units.
Additive Gaussian noise is added into $\mathbf{c}$ before feeding it into the decoder. The standard deviation of that noise is exponentially decayed every 100 iterations.
The decoder is a one-layer LSTM with 300 hidden units.
The decoding process at each time step takes the top-layer LSTM hidden state and concatenates it with the hidden code $\mathbf{c}$, before feeding them into the output (i.e. vocabulary projection) and softmax layers.
The word embedding is of size 300.
We adopt gradient clipping on the encoder/decoder, with max grad_norm = 1.
The encoder/decoder is optimized by vanilla SGD with learning rate 1.
An MLP generator 100-300-300, using batch normalization and ReLU non-linearity.
An MLP critic 300-300-1 with weight clipping. The critic is trained for 5 iterations within each GAN loop.
Both components of the GAN are optimized by Adam, with learning rate 5e-05 on the generator and 1e-05 on the critic.
We increment the number of GAN training loops (the GAN training loop refers to how many times we train the GAN within each overall training loop; one training loop contains training the autoencoder for one loop and training the GAN for one or several) at the beginning of epoch 2, epoch 4, and epoch 6.
Similar to the SNLI generation experiment setup, with the following changes:
We employ larger networks for the GAN components: an MLP generator 100-150-300-500 and an MLP critic 500-500-150-80-20-1 with weight clipping. The critic is trained for 10 iterations within each GAN loop.
Similar to the SNLI setup, with the following changes:
The encoder and decoder size are both increased to 500 hidden units.
The style adversarial classifier is an MLP with structure 300-200-100, trained with SGD.
We employ both larger generator and discriminator architectures in the GAN: generator 200-400-800 (with the $\mathbf{z}$ dimension set to 200) and discriminator 300-160-80-20.
Weighting factors for the critic and classifier gradients, $\lambda^{(1)}$ and $\lambda^{(2)}$.
No GAN loop scheduling is employed here.