Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAEs), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approach: use AVI to initialize the variational parameters and run stochastic variational inference (SVI) to refine them. Crucially, the local SVI procedure is itself differentiable, so the inference network and generative model can be trained end-to-end with gradient-based optimization. This semi-amortized approach enables the use of rich generative models without experiencing the posterior-collapse phenomenon common in training VAEs for problems like text generation. Experiments show this approach outperforms strong autoregressive and variational baselines on standard text and image datasets.
Variational inference (VI) (Jordan et al., 1999; Wainwright & Jordan, 2008) is a framework for approximating an intractable distribution by optimizing over a family of tractable surrogates. Traditional VI algorithms iterate over the observed data and update the variational parameters with closed-form coordinate ascent updates that exploit conditional conjugacy (Ghahramani & Beal, 2001). This style of optimization is challenging to extend to large datasets and non-conjugate models. However, recent advances in stochastic (Hoffman et al., 2013), black-box (Ranganath et al., 2014, 2016), and amortized (Mnih & Gregor, 2014; Kingma & Welling, 2014; Rezende et al., 2014) variational inference have made it possible to scale to large datasets and rich, non-conjugate models (see Blei et al. (2017), Zhang et al. (2017) for a review of modern methods).
In stochastic variational inference (SVI), the variational parameters for each data point are randomly initialized and then optimized to maximize the evidence lower bound (ELBO) with, for example, gradient ascent. These updates are based on a subset of the data, making it possible to scale the approach. In amortized variational inference (AVI), the local variational parameters are instead predicted by an inference (or recognition) network, which is shared (i.e. amortized) across the dataset. Variational autoencoders (VAEs) are deep generative models that utilize AVI for inference and jointly train the generative model alongside the inference network.
SVI gives good local (i.e. instance-specific) distributions within the variational family but requires performing optimization for each data point. AVI has fast inference, but requiring the variational parameters to be a parametric function of the input may be too strict a restriction. This may in turn hinder learning of a good generative model, since its parameters may be updated based on suboptimal variational parameters. Cremer et al. (2018) observe that the amortization gap (the portion of the gap between the log-likelihood and the ELBO that is attributable to amortization) can be significant for VAEs, especially on complex datasets.
Recent work has targeted this amortization gap by combining amortized inference with iterative refinement during training (Hjelm et al., 2016; Krishnan et al., 2018). These methods use an encoder to initialize the local variational parameters and then run an iterative procedure to refine them. To train with this hybrid approach, they employ a separate training-time objective. For example, Hjelm et al. (2016) train the inference network to minimize the KL-divergence between the initial and the final variational distributions, while Krishnan et al. (2018) train the inference network with the usual ELBO objective based on the initial variational distribution.
In this work, we address the train/test objective mismatch and consider methods for training semi-amortized variational autoencoders (SA-VAE) in a fully end-to-end manner. We propose an approach that leverages differentiable optimization (Domke, 2012; Maclaurin et al., 2015; Belanger et al., 2017) and differentiates through SVI while training the inference network/generative model. We find that this method both improves estimation of variational parameters and produces better generative models.
We apply our approach to train deep generative models of text and images, and observe that they outperform autoregressive/VAE/SVI baselines, in addition to direct baselines that combine VAE with SVI but do not perform end-to-end training. We also find that under our framework we are able to utilize a powerful generative model without experiencing the “posterior-collapse” phenomenon often observed in VAEs, wherein the variational posterior collapses to the prior and the generative model ignores the latent variable (Bowman et al., 2016; Chen et al., 2017; Zhao et al., 2017). This problem has made it especially difficult to utilize VAEs for text, and it remains an important open issue in the field. With SA-VAE, we are able to outperform an LSTM language model by utilizing an LSTM generative model that maintains non-trivial latent representations. Code is available at https://github.com/harvardnlp/sa-vae.
Let $f : \mathbb{R}^n \to \mathbb{R}$ be a scalar-valued function with partitioned inputs $u = [u_1, \dots, u_m]$ such that $\sum_{i=1}^m \dim(u_i) = n$. With a slight abuse of notation we define $f(u_1, \dots, u_m) = f(u)$. We denote $\nabla_{u_i} f(\hat{u})$ to be the $i$-th block of the gradient of $f$ evaluated at $\hat{u}$, and further use $\frac{\mathrm{d}f}{\mathrm{d}u_i}\big|_{\hat{u}}$ to denote the total derivative of $f$ with respect to $u_i$ evaluated at $\hat{u}$, which exists if $u$ is a differentiable function of $u_i$. Note that in general $\frac{\mathrm{d}f}{\mathrm{d}u_i}\big|_{\hat{u}} \neq \nabla_{u_i} f(\hat{u})$, since other components of $u$ could be a function of $u_i$.¹ We also let $H_{i,j}(\hat{u})$ be the matrix formed by taking the $i$-th group of rows and the $j$-th group of columns of the Hessian of $f$ evaluated at $\hat{u}$. These definitions generalize straightforwardly when $f$ is a vector-valued function (e.g. $f : \mathbb{R}^n \to \mathbb{R}^p$).²

¹ This will indeed be the case in our approach: when we calculate the total derivative of the ELBO with respect to the inference network parameters, the refined variational parameters are a function of the data point $x$, the generative model $\theta$, and the inference network $\phi$ (Section 3).

² Total derivatives/Jacobians are usually denoted with row vectors, but we denote them with column vectors for clearer notation.
Consider the following generative process for $x$: a latent variable is first drawn from the prior, $z \sim p(z)$, and the data is then drawn from the generative model, $x \sim p(x \mid z; \theta)$, where $p(z)$ is the prior and $p(x \mid z; \theta)$ is given by a generative model with parameters $\theta$. As maximizing the log-likelihood $\log p(x; \theta)$ is usually intractable, variational inference instead defines a variational family of distributions $q(z; \lambda)$ parameterized by $\lambda$ and maximizes the evidence lower bound (ELBO)

$$\mathrm{ELBO}(\lambda, \theta, x) = \mathbb{E}_{q(z;\lambda)}[\log p(x \mid z; \theta)] - \mathrm{KL}[q(z;\lambda) \,\|\, p(z)] \;\le\; \log p(x; \theta).$$

The variational posterior $q(z;\lambda)$ is said to collapse to the prior if $\mathrm{KL}[q(z;\lambda) \,\|\, p(z)] \approx 0$. In the general case we are given a dataset $x^{(1)}, \dots, x^{(N)}$ and need to find variational parameters $\lambda^{(1)}, \dots, \lambda^{(N)}$ and generative model parameters $\theta$ that jointly maximize $\sum_{n=1}^{N} \mathrm{ELBO}(\lambda^{(n)}, \theta, x^{(n)})$.
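For the diagonal Gaussian family used later in the paper, the ELBO can be estimated with one reparameterized sample plus a closed-form KL term. The sketch below is purely illustrative: the function names and the toy Gaussian likelihood are ours, not the paper's models.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL[N(mu, diag(exp(logvar))) || N(0, I)]."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def elbo_estimate(x, mu, logvar, log_likelihood, rng):
    """One-sample reparameterized Monte Carlo estimate of the ELBO."""
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    return log_likelihood(x, z) - gaussian_kl(mu, logvar)

# Toy Gaussian likelihood: x | z ~ N(z, I).
def toy_log_likelihood(x, z):
    return -0.5 * np.sum((x - z) ** 2) - 0.5 * x.size * np.log(2 * np.pi)

rng = np.random.default_rng(0)
elbo = elbo_estimate(np.zeros(2), np.zeros(2), np.zeros(2),
                     toy_log_likelihood, rng)
```

When $\mu = 0$ and $\log \sigma^2 = 0$ the KL term vanishes, so the estimate reduces to the sampled log-likelihood.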
We can apply SVI (Hoffman et al., 2013) with gradient ascent to approximately maximize the above objective:³

1. Sample $x \sim p_{\mathcal{D}}(x)$ and randomly initialize $\lambda_0$.
2. For $k = 0, \dots, K-1$, set $\lambda_{k+1} = \lambda_k + \alpha \nabla_\lambda \mathrm{ELBO}(\lambda_k, \theta, x)$.
3. Update $\theta$ based on $\nabla_\theta \mathrm{ELBO}(\lambda_K, \theta, x)$.

Here $K$ is the number of SVI iterations and $\alpha$ is the learning rate. (Note that $\theta$ is updated based on the gradient $\nabla_\theta \mathrm{ELBO}(\lambda_K, \theta, x)$ and not the total derivative $\frac{\mathrm{d}\,\mathrm{ELBO}(\lambda_K, \theta, x)}{\mathrm{d}\theta}$. The latter would take into account the fact that $\lambda_k$ is a function of $\theta$ for $k \ge 1$.)

³ While we describe the various algorithms for a specific data point, in practice we use mini-batches.
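As a toy instance of this inner loop, consider the conjugate model $z \sim \mathcal{N}(0,1)$, $x \mid z \sim \mathcal{N}(z,1)$ with $q(z) = \mathcal{N}(\mu, 1)$ and the variance held fixed: the ELBO gradient with respect to $\mu$ is $x - 2\mu$ and the optimum is the true posterior mean $\mu^* = x/2$. The sketch below is our own minimal illustration, not the paper's implementation.

```python
def elbo_grad_mu(mu, x):
    # For z ~ N(0,1), x|z ~ N(z,1), q(z) = N(mu, 1): d ELBO / d mu = x - 2*mu.
    return x - 2.0 * mu

def svi_refine(x, n_steps=100, alpha=0.1, mu_init=0.0):
    """Run K steps of gradient ascent on the ELBO (the inner SVI loop)."""
    mu = mu_init  # randomly initialized in practice
    for _ in range(n_steps):
        mu = mu + alpha * elbo_grad_mu(mu, x)
    return mu
```

For $x = 3$ the iterates contract toward the true posterior mean $1.5$.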
SVI optimizes directly for instance-specific variational distributions, but may require running iterative inference for a large number of steps. Further, because of this block coordinate ascent approach the variational parameters are optimized separately from $\theta$, potentially making it difficult for $\theta$ to adapt to local optima.
AVI uses a global parametric model to predict the local variational parameters for each data point. A particularly popular application of AVI is in training the variational autoencoder (VAE) (Kingma & Welling, 2014), which runs an inference network (i.e. encoder) $\mathrm{enc}(\cdot\,;\phi)$ parameterized by $\phi$ over the input to obtain the variational parameters:

1. Sample $x \sim p_{\mathcal{D}}(x)$ and set $\lambda = \mathrm{enc}(x; \phi)$.
2. Update $\theta$ based on $\nabla_\theta \mathrm{ELBO}(\lambda, \theta, x)$ (which in this case is equal to the total derivative, since $\lambda$ does not depend on $\theta$).
3. Update $\phi$ based on the total derivative $\frac{\mathrm{d}\,\mathrm{ELBO}(\lambda, \theta, x)}{\mathrm{d}\phi}$.

The inference network is learned jointly alongside the generative model with the same loss function, allowing the pair to coadapt. Additionally, inference for AVI involves running the inference network over the input, which is usually much faster than running iterative optimization on the ELBO. Despite these benefits, requiring the variational parameters to be a parametric function of the input may be too strict a restriction and can lead to an amortization gap. This gap can propagate forward to hinder the learning of the generative model if $\theta$ is updated based on suboptimal $\lambda$.
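Continuing the toy conjugate model ($z \sim \mathcal{N}(0,1)$, $x \mid z \sim \mathcal{N}(z,1)$, fixed posterior variance), amortization replaces the per-instance loop with a shared map from $x$ to the variational mean. Here the "encoder" is a single scalar weight, purely for illustration; the paper's encoders are LSTMs/ResNets.

```python
def train_amortized_encoder(xs, n_sweeps=500, lr=0.05):
    """Fit mu = w * x by ascending the ELBO of the toy model
    z ~ N(0,1), x|z ~ N(z,1), q(z) = N(w*x, 1).

    Chain rule: d ELBO / d w = (d ELBO / d mu) * (d mu / d w)
                             = (x - 2*w*x) * x.
    """
    w = 0.0
    for _ in range(n_sweeps):
        for x in xs:
            w += lr * (x - 2.0 * w * x) * x
    return w
```

Since the optimal mean is $x/2$ for every $x$, the shared weight converges to $0.5$; in a model where no single parametric map is optimal for all inputs, the shortfall is exactly the amortization gap.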
Semi-amortized variational autoencoders (SA-VAE) utilize an inference network over the input to give the initial variational parameters, and subsequently run SVI to refine them. One might appeal to the universal approximation theorem (Hornik et al., 1989) and question the necessity of additional SVI steps given a rich-enough inference network. However, in practice we find that the variational parameters found from VAE are usually not optimal even with a powerful inference network, and the amortization gap can be significant especially on complex datasets (Cremer et al., 2018; Krishnan et al., 2018).
SA-VAE models are trained using a combination of AVI and SVI steps:

1. Sample $x \sim p_{\mathcal{D}}(x)$.
2. Set $\lambda_0 = \mathrm{enc}(x; \phi)$.
3. For $k = 0, \dots, K-1$, set $\lambda_{k+1} = \lambda_k + \alpha \nabla_\lambda \mathrm{ELBO}(\lambda_k, \theta, x)$.
4. Update $\theta$ based on $\frac{\mathrm{d}\,\mathrm{ELBO}(\lambda_K, \theta, x)}{\mathrm{d}\theta}$.
5. Update $\phi$ based on $\frac{\mathrm{d}\,\mathrm{ELBO}(\lambda_K, \theta, x)}{\mathrm{d}\phi}$.

Note that for training we need to compute the total derivative of the final ELBO with respect to $\theta$ and $\phi$ (i.e. steps 4 and 5 above). Unlike with AVI, in order to update the encoder and generative model parameters, this total derivative requires backpropagating through the SVI updates. Specifically this requires backpropagating through gradient ascent (Domke, 2012; Maclaurin et al., 2015).
Following past work, this backpropagation step can be done efficiently with fast Hessian-vector products (LeCun et al., 1993; Pearlmutter, 1994). In particular, consider the case where we perform one step of refinement, $\lambda_1 = \lambda_0 + \alpha \nabla_\lambda \mathrm{ELBO}(\lambda_0, \theta, x)$, and for brevity let $\mathcal{L} = \mathrm{ELBO}(\lambda_1, \theta, x)$. To backpropagate through this, we receive the derivative $\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\lambda_1}$ and use the chain rule,

$$\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\lambda_0} = \frac{\mathrm{d}\lambda_1}{\mathrm{d}\lambda_0}\,\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\lambda_1} = \big(I + \alpha H_{\lambda,\lambda}(\lambda_0, \theta, x)\big)\,\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\lambda_1},$$

where $H_{\lambda,\lambda}(\lambda_0, \theta, x)$ is the Hessian of the ELBO with respect to $\lambda$ evaluated at $\lambda_0$. We can then backpropagate through the inference network to calculate the total derivative, i.e. $\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\phi} = \frac{\mathrm{d}\lambda_0}{\mathrm{d}\phi}\,\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\lambda_0}$. Similar rules can be used to derive $\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\theta}$.⁴ The full forward/backward step, which uses gradient descent with momentum on the negative ELBO, is shown in Algorithm 1.

⁴ We refer the reader to Domke (2012) for the full derivation.
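The backward rule can be sanity-checked numerically on a quadratic stand-in for the ELBO, $f(\lambda) = -\tfrac{1}{2}\lambda^\top A \lambda + b^\top \lambda$, whose Hessian is $-A$. This is a self-contained check with our own names, not the paper's code.

```python
import numpy as np

A = np.array([[2.0, 0.3], [0.3, 1.0]])   # SPD, so f is concave
b = np.array([1.0, -0.5])
alpha = 0.1

def f(lam):          # quadratic stand-in for the ELBO
    return -0.5 * lam @ A @ lam + b @ lam

def grad(lam):       # gradient of f; the Hessian of f is -A
    return b - A @ lam

lam0 = np.array([0.2, 0.7])
lam1 = lam0 + alpha * grad(lam0)          # one refinement step

# Backward rule: dL/dlam0 = (I + alpha * H) @ dL/dlam1, with H = -A.
dL_dlam1 = grad(lam1)
dL_dlam0 = (np.eye(2) - alpha * A) @ dL_dlam1

# Finite-difference check of the total derivative dL/dlam0.
eps = 1e-6
fd = np.zeros(2)
for i in range(2):
    e = np.zeros(2)
    e[i] = eps
    up, dn = lam0 + e, lam0 - e
    fd[i] = (f(up + alpha * grad(up)) - f(dn + alpha * grad(dn))) / (2 * eps)
```

The analytic backward pass and the finite-difference estimate agree to numerical precision.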
In our implementation we calculate Hessian-vector products with finite differences (LeCun et al., 1993; Domke, 2012), which was found to be more memory-efficient than automatic differentiation (and therefore crucial for scaling our approach to rich inference networks/generative models). Specifically, we estimate $H_{\lambda,\lambda}(\lambda, \theta, x)\,v$ with

$$H_{\lambda,\lambda}(\lambda, \theta, x)\,v \approx \frac{1}{\epsilon}\big(\nabla_\lambda \mathrm{ELBO}(\lambda + \epsilon v, \theta, x) - \nabla_\lambda \mathrm{ELBO}(\lambda, \theta, x)\big),$$

where $\epsilon$ is some small number.⁵ We further clip the results (i.e. rescale the results if the norm exceeds a threshold) before and after each Hessian-vector product as well as during SVI, which helped mitigate exploding gradients and gave better training signal to the inference network.⁶ Another way to provide better signal to the inference network is to train against a weighted sum $\sum_{k=0}^{K} w_k\,\mathrm{ELBO}(\lambda_k, \theta, x)$ for $w_k \ge 0$. See Appendix A for details.

⁵ Since in our case the ELBO is a non-deterministic function due to sampling (and dropout, if applicable), care must be taken when calculating the Hessian-vector product with finite differences to ensure that the source of randomness is the same when calculating the two gradient expressions.

⁶ Without gradient clipping, in addition to numerical issues we empirically observed the model to degenerate to a case whereby it learned to rely too much on iterative inference, and thus the initial parameters from the inference network were poor.
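The forward-difference estimate can be sketched directly and checked against a quadratic whose Hessian is known in closed form (the helper name is ours):

```python
import numpy as np

def hvp_finite_diff(grad_fn, lam, v, eps=1e-5):
    """Estimate (Hessian @ v) as (grad(lam + eps*v) - grad(lam)) / eps."""
    return (grad_fn(lam + eps * v) - grad_fn(lam)) / eps

# For f(lam) = 0.5 * lam' A lam, the gradient is A @ lam and the Hessian is A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad_fn = lambda lam: A @ lam

v = np.array([1.0, -1.0])
approx = hvp_finite_diff(grad_fn, np.array([0.5, 0.5]), v)
```

With a stochastic ELBO, the two gradient evaluations must share the same source of randomness, as noted in the text above.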
We apply our approach to train generative models on a synthetic dataset in addition to text/images. For all experiments we utilize stochastic gradient descent with momentum on the negative ELBO. Our prior is the spherical Gaussian $p(z) = \mathcal{N}(z; 0, I)$ and the variational posterior is a diagonal Gaussian, where the variational parameters are given by the mean vector and the diagonal log variance vector, i.e. $\lambda = [\mu, \log \sigma^2]$.
In preliminary experiments we also experimented with natural gradients, other optimization algorithms, and learning the learning rates, but found that these did not significantly improve results. Full details regarding hyperparameters/model architectures for all experiments are in Appendix B.
We first apply our approach to a synthetic dataset where we have access to the true underlying generative model of discrete sequences. We generate synthetic sequential data according to an oracle generative process with 2-dimensional latent variables $z_1$ and $z_2$, in which an LSTM's hidden state and the latent variables are fed to an MLP that produces the distribution over the next token. We initialize the LSTM/MLP parameters randomly, where the LSTM has a single layer with hidden state/input dimension equal to 100. We generate for 5 time steps (so each example is given by $x_{1:5}$) with a vocabulary size of 1000 for each $x_t$. The training set consists of 5000 points. See Appendix B.1 for the exact setup.
We fix this oracle generative model and learn an inference network (also a one-layer LSTM) with VAE and SA-VAE.⁷ For a randomly selected test point, we plot the ELBO landscape in Figure 1 as a function of the variational posterior means $(\mu_1, \mu_2)$ learned from the different methods. For SVI/SA-VAE we run iterative optimization for 20 steps. Finally, we also show the optimal variational parameters found from grid search.

⁷ With a fixed oracle, these models are technically not VAEs, as VAE usually implies that the generative model is learned (alongside the encoder).
[Table 1: variational upper bounds (negative ELBO) on the NLL for the various models under both the oracle and learned generative models, along with the true NLL (estimated).]
As can be seen from Figure 1, the variational parameters from running SA-VAE are closest to the optimum while those obtained from SVI and VAE are slightly further away. In Table 1 we show the variational upper bounds (i.e. negative ELBO) on the negative log-likelihood (NLL) from training the various models with both the oracle/learned generative model, and find that SA-VAE outperforms VAE/SVI in both cases.
The next set of experiments is focused on text modeling on the Yahoo questions corpus from Yang et al. (2017). Text modeling with deep generative models has been a challenging problem, and few approaches have been shown to produce rich generative models that do not collapse to standard language models. Ideally a deep generative model trained with variational inference would make use of the latent space (i.e. maintain a nonzero KL term) while accurately modeling the underlying distribution.
Our architecture and hyperparameters are identical to the LSTM-VAE baselines considered in Yang et al. (2017), except that we train with SGD instead of Adam, which was found to perform better for training LSTMs. Specifically, both the inference network and the generative model are one-layer LSTMs with 1024 hidden units and 512-dimensional word embeddings. The last hidden state of the encoder is used to predict the vector of variational posterior means/log variances. The sample from the variational posterior is used to predict the initial hidden state of the generative LSTM and is additionally fed as input at each time step. The latent variable is 32-dimensional. Following previous work (Bowman et al., 2016; Sønderby et al., 2016; Yang et al., 2017), for all the variational models we utilize a KL-cost annealing strategy whereby the multiplier on the KL term is increased linearly from 0.1 to 1.0 each batch over 10 epochs. Appendix B.2 has the full architecture/hyperparameters.
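A linear KL-cost annealing schedule of this form can be sketched as follows (a generic ramp; the helper name and step granularity are our own):

```python
def kl_weight(step, warmup_steps, start=0.1, end=1.0):
    """Multiplier on the KL term: increases linearly from `start` to `end`
    over `warmup_steps` updates, then stays at `end`."""
    if step >= warmup_steps:
        return end
    return start + (end - start) * step / warmup_steps
```

At each batch the training objective is then the reconstruction term plus `kl_weight(step, warmup_steps)` times the KL term.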
In addition to autoregressive/VAE/SVI baselines, we consider two other approaches that also combine amortized inference with iterative refinement. The first approach is from Krishnan et al. (2018), where the generative model takes a gradient step based on the final variational parameters and the inference network takes a gradient step based on the initial variational parameters, i.e. we update $\theta$ based on $\nabla_\theta \mathrm{ELBO}(\lambda_K, \theta, x)$ and update $\phi$ based on $\frac{\mathrm{d}\,\mathrm{ELBO}(\lambda_0, \theta, x)}{\mathrm{d}\phi}$. The forward step (steps 1-3 in Section 3) is identical to SA-VAE. We refer to this baseline as VAE + SVI.
In the second approach, based on Salakhutdinov & Larochelle (2010) and Hjelm et al. (2016), we train the inference network to minimize the KL-divergence between the initial and the final variational distributions, keeping the latter fixed. Specifically, letting $\ell = \mathrm{KL}[q(z;\lambda_0) \,\|\, q(z;\lambda_K)]$, we update $\theta$ based on $\nabla_\theta \mathrm{ELBO}(\lambda_K, \theta, x)$ and update $\phi$ based on $\nabla_\phi \ell$. Note that the inference network is not updated based on the total derivative $\frac{\mathrm{d}\ell}{\mathrm{d}\phi}$, which would take into account the fact that both $\lambda_0$ and $\lambda_K$ are functions of $\phi$. We found this direction of the KL to perform better than the reverse direction $\mathrm{KL}[q(z;\lambda_K) \,\|\, q(z;\lambda_0)]$. We refer to this setup as VAE + SVI + KL.
[Table 2: results on the Yahoo corpus for the baselines (LSTM-VAE + Init, CNN-VAE + Init, VAE + Init, VAE + Word-Drop 25%, VAE + Word-Drop 50%) and for VAE + SVI and VAE + SVI + KL, each with two settings of the number of refinement steps.]
Results from the various models are shown in Table 2. Our baseline models (LM/VAE/SVI in Table 2) are already quite strong and outperform the models considered in Yang et al. (2017). However, models trained with VAE/SVI make negligible use of the latent variable and practically collapse to a language model, negating the benefits of using latent variables. (Models trained with word dropout, + Word-Drop in Table 2, do make use of the latent space but significantly underperform a language model.) In contrast, models that combine amortized inference with iterative refinement make use of the latent space and the KL term is significantly above zero, although a high KL term does not necessarily imply that the latent variable is being utilized in a meaningful way (it could simply be due to bad optimization); in Section 5.1 we investigate the learned latent space in more detail. VAE + SVI and VAE + SVI + KL do not outperform a language model, and while SA-VAE only modestly outperforms it, to our knowledge this is one of the first instances in which we are able to train an LSTM generative model that does not ignore the latent code and outperforms a language model.
One might wonder if the improvements are coming from simply having a more flexible inference scheme at test time, rather than from learning a better generative model. To test this, for the various models we discard the inference network at test time and perform SVI for a variable number of steps from random initialization. The results are shown in Figure 2 (left). It is clear that the learned generative model (and the associated ELBO landscape) is quite different—it is not possible to train with VAE and perform SVI at test time to obtain the same performance as SA-VAE (although the performance of VAE does improve slightly from 62.7 to 62.3 when we run SVI for 40 steps from random initialization).
Figure 2 (right) has the results for a similar experiment where we refine the variational parameters initialized from the inference network for a variable number of steps at test time. We find that the inference network provides better initial parameters than random initialization and thus requires fewer iterations of SVI to reach the optimum. We do not observe improvements for running more refinement steps than was used in training at test time. Interestingly, SA-VAE without any refinement steps at test time has a substantially nonzero KL term (KL = 6.65, PPL = 62.0). This indicates that the posterior-collapse phenomenon when training LSTM-based VAEs for text is partially due to optimization issues. Finally, while Yang et al. (2017) found that initializing the encoder with a pretrained language model improved performance (+ Init in Table 2), we did not observe this on our baseline VAE model when we trained with SGD and hence did not pursue this further.
[Table 3: results on OMNIGLOT for IWAE (Burda et al., 2015a), Ladder VAE (Sønderby et al., 2016), RBM (Burda et al., 2015b), Discrete VAE (Rolfe, 2017), DRAW (Gregor et al., 2015), Conv DRAW (Gregor et al., 2016), VLAE (Chen et al., 2017), VampPrior (Tomczak & Welling, 2018), and our VAE + SVI and VAE + SVI + KL variants, each with two settings of the number of refinement steps.]
We next apply our approach to model images on the OMNIGLOT dataset (Lake et al., 2015).¹⁰ While posterior collapse is less of an issue for VAEs trained on images, we still expect that improving the amortization gap would result in generative models that better model the underlying data and make more use of the latent space. We use a three-layer ResNet (He et al., 2016) as our inference network. The generative model first transforms the 32-dimensional latent vector to the image spatial resolution, which is concatenated with the original image and fed to a 12-layer Gated PixelCNN (van den Oord et al., 2016) with varying filter sizes, followed by a final sigmoid layer. We employ the same KL-cost annealing schedule as in the text experiments. See Appendix B.3 for the exact architecture/hyperparameters.

¹⁰ We focus on the more complex OMNIGLOT dataset instead of the simpler MNIST dataset, as prior work has shown that the amortization gap on MNIST is minimal (Cremer et al., 2018).
Results from the various models are shown in Table 3. Our findings are largely consistent with results from text: the semi-amortized approaches outperform VAE/SVI baselines, and further they learn generative models that make more use of the latent representations (i.e. the KL portion of the loss is higher). Even with 80 steps of SVI we are unable to perform as well as SA-VAE trained with 10 refinement steps, indicating the importance of good initial parameters provided by the inference network. In Appendix C we further investigate the performance of VAE and SA-VAE as we vary the training set size and the capacity of the inference network/generative model. We find that SA-VAE outperforms VAE and has higher latent variable usage in all scenarios. We note that we do not outperform the state-of-the-art models that employ hierarchical latent variables and/or more sophisticated priors (Chen et al., 2017; Tomczak & Welling, 2018). However these additions are largely orthogonal to our approach and we hypothesize they will also benefit from combining amortized inference with iterative refinement.¹¹

¹¹ Indeed, Cremer et al. (2018) observe that the amortization gap can be substantial for VAEs trained with richer variational families.
For the text model we investigate what the latent variables are learning through saliency analysis with our best model (SA-VAE trained with 20 steps). Specifically, we calculate the output saliency of each token $w_t$ with respect to $z$ as

$$\mathbb{E}_{q(z;\lambda)}\!\left[\left\lVert \frac{\mathrm{d} \log p(w_t \mid w_{<t}, z)}{\mathrm{d}z} \right\rVert_2\right],$$

where $\lVert \cdot \rVert_2$ is the $\ell_2$ norm and the expectation is approximated with 5 samples from the variational posterior. Saliency is therefore a measure of how much the latent variable is being used to predict a particular token.
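This expectation can be approximated directly with reparameterized posterior samples. Below is a schematic with a stand-in gradient function; in the real model, `grad_logp_wrt_z` would be supplied by backpropagating $\log p(w_t \mid w_{<t}, z)$ through the decoder (the function names are ours):

```python
import numpy as np

def output_saliency(grad_logp_wrt_z, mu, logvar, n_samples=5, seed=0):
    """Monte Carlo estimate of E_{q(z)}[ ||d log p(w_t|w_<t, z)/dz||_2 ]
    using samples from the diagonal Gaussian posterior q(z; mu, logvar)."""
    rng = np.random.default_rng(seed)
    std = np.exp(0.5 * logvar)
    norms = []
    for _ in range(n_samples):
        z = mu + std * rng.standard_normal(mu.shape)
        norms.append(np.linalg.norm(grad_logp_wrt_z(z)))
    return float(np.mean(norms))
```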
Figure 3 (top): test examples from the Yahoo corpus, visualized with output saliency:

- "where can i buy an affordable stationary bike ? try this place , they have every type imaginable with prices to match . http : UNK /s"
- "if our economy collapses , will canada let all of us cross their border ? no , a country would have to be stupid to let that many people cross their borders and drain their resources . /s"
- "does the flat earth society still exist ? i 'm curious to know whether the original society still exists . i 'm not especially interested in discussion about whether the earth is flat or round . although there is no currently active website for the society , someone ( apparently a relative of samuel UNK ) maintains the flat earth society forums . this website , which offers a discussion forum and an on-line archive of flat earth society UNK from the 1970s and 1980s , represents a serious attempt to UNK the original flat earth society . /s"

Figure 3 (middle): a test example, followed by two samples from the variational posterior:

- "s where can i buy an affordable stationary bike ? try this place , they have every type imaginable with prices to match . http : UNK /s"
- "where can i find a good UNK book for my daughter ? i am looking for a website that sells christmas gifts for the UNK . thanks ! UNK UNK /s"
- "where can i find a good place to rent a UNK ? i have a few UNK in the area , but i 'm not sure how to find them . http : UNK /s"

Figure 3 (bottom): a made-up example, followed by two samples from the variational posterior:

- "s which country is the best at soccer ? brazil or germany . /s"
- "who is the best soccer player in the world ? i think he is the best player in the world . ronaldinho is the best player in the world . he is a great player . /s"
- "will ghana be able to play the next game in 2010 fifa world cup ? yes , they will win it all . /s"
We visualize the saliency of a few examples from the test set in Figure 3 (top). Each example consists of a question followed by an answer from the Yahoo corpus. From a qualitative analysis several things are apparent: the latent variable seems to encode question type (i.e. if, what, how, why, etc.) and therefore saliency is high for the first word; content words (nouns, adjectives, lexical verbs) have much higher saliency than function words (determiners, prepositions, conjunctions, etc.); saliency of the /s token is quite high, indicating that length information is also encoded in the latent space. In the third example we observe that the left parenthesis has higher saliency than the right parenthesis (0.32 vs. 0.24 on average across the test set), as the latter can be predicted by conditioning on the former rather than on the latent representation $z$.
The previous definition of saliency measures the influence of $z$ on the output tokens. We can also roughly measure the influence of the $t$-th input token on the latent representation, which we refer to as input saliency. Here $\mathbf{w}_t$ is the encoder word embedding for the $t$-th token.¹² We visualize the input saliency for a test example (Figure 3, middle) and a made-up example (Figure 3, bottom). Under each input example we also visualize two samples from the variational posterior, and find that the generated examples are often meaningfully related to the input example.¹³

¹² As the norm of $\mathbf{w}_t$ is a rather crude measure, a better measure would be obtained by analyzing the spectra of the Jacobian $\frac{\mathrm{d}z}{\mathrm{d}\mathbf{w}_t}$. However this is computationally too expensive to calculate for each token in the corpus.

¹³ We first sample $z$ and then the tokens given $z$. When sampling the tokens we sample with a temperature $T$ below 1, i.e. proportionally to $\mathrm{softmax}(s/T)$ where $s$ is the vector with scores for all words. We found the generated examples to be related to the original (in some way) in roughly half the cases.
We quantitatively analyze output saliency across part-of-speech, token position, word frequency, and log-likelihood in Figure 4. Nouns (NN), adjectives (JJ), verbs (VB), numbers (CD), and the /s token have higher saliency than conjunctions (CC), determiners (DT), prepositions (IN), and the TO token; the latter are relatively easier to predict by conditioning on previous tokens. Similarly, on average, tokens occurring earlier have much higher saliency than those occurring later (Figure 4 shows absolute position, but the plot is similar with relative position). The latent variable is used much more when predicting rare tokens. There is some negative correlation between saliency and log-likelihood (-0.51), though this relationship does not always hold: e.g. /s has high saliency but is relatively easy to predict, with an average log-likelihood of -1.61 (vs. an average log-likelihood of -4.10 for all tokens). Appendix D has the corresponding analysis for input saliency, which is qualitatively similar.
These results seem to suggest that the latent variables are encoding interesting and potentially interpretable aspects of language. While left as future work, it is possible that manipulations in the latent space of a model learned this way could lead to controlled generation/manipulation of output text (Hu et al., 2017; Mueller et al., 2017).
A drawback of our approach (and other non-amortized inference methods) is that each training step requires backpropagating through the generative model multiple times, which can be costly especially if the generative model is expensive to compute (e.g. LSTM/PixelCNN). This may potentially be mitigated through more sophisticated meta learning approaches (Andrychowicz et al., 2016; Marino et al., 2018), or with more efficient use of the past gradient information during SVI via averaging (Schmidt et al., 2013) or importance sampling (Sakaya & Klami, 2017). One could also consider employing synthetic gradients (Jaderberg et al., 2017) to limit the number of backpropagation steps during training. Krishnan et al. (2018) observe that it is more important to train with iterative refinement during earlier stages (we also observed this in preliminary experiments), and therefore annealing the number of refinement steps as training progresses could also speed up training.
Our approach is mainly applicable to variational families that lend themselves to differentiable optimization (e.g. gradient ascent) with respect to the ELBO, which includes much recent work on employing more flexible variational families with VAEs. In contrast, VAE + SVI and VAE + SVI + KL are applicable to more general optimization algorithms.
Our work is most closely related to the line of work which uses a separate model to initialize variational parameters and subsequently updates them through an iterative procedure (Salakhutdinov & Larochelle, 2010; Cho et al., 2013; Salimans et al., 2015; Hjelm et al., 2016; Krishnan et al., 2018; Pu et al., 2017). Marino et al. (2018) utilize meta-learning to train an inference network which learns to perform iterative inference, by training a deep model to output the variational parameters at each step.
While differentiating through inference/optimization was initially explored by various researchers primarily outside the area of deep learning (Stoyanov et al., 2011; Domke, 2012; Brakel et al., 2013), it has more recently been explored in the context of hyperparameter optimization (Maclaurin et al., 2015) and as a differentiable layer of a deep model (Belanger et al., 2017; Kim et al., 2017; Metz et al., 2017; Amos & Kolter, 2017).
Initial work on VAE-based approaches to image modeling focused on simple generative models that assumed independence among pixels conditioned on the latent variable (Kingma & Welling, 2014; Rezende et al., 2014). More recent works have obtained substantial improvements in log-likelihood and sample quality by utilizing powerful autoregressive models (PixelCNN) as the generative model (Chen et al., 2017; Gulrajani et al., 2017).
In contrast, modeling text with VAEs has remained challenging. Bowman et al. (2016) found that using an LSTM generative model resulted in a degenerate case whereby the variational posterior collapsed to the prior and the generative model ignored the latent code (even with richer variational families). Many works on VAEs for text have thus made simplifying conditional independence assumptions (Miao et al., 2016, 2017), used less powerful generative models such as convolutional networks (Yang et al., 2017; Semeniuta et al., 2017), or combined a recurrent generative model with a topic model (Dieng et al., 2017; Wang et al., 2018). Note that unlike sequential VAEs that employ different latent variables at each time step (Chung et al., 2015; Fraccaro et al., 2016; Krishnan et al., 2017; Serban et al., 2017; Goyal et al., 2017a), in this work we focus on modeling the entire sequence with a global latent variable.
Finally, since our work only addresses the amortization gap (the portion of the gap between the log-likelihood and the ELBO that is due to amortization) and not the approximation gap (due to the choice of a particular variational family) (Cremer et al., 2018), it can be combined with existing work on employing richer posterior/prior distributions within the VAE framework (Rezende & Mohamed, 2015; Kingma et al., 2016; Johnson et al., 2016; Tran et al., 2016; Goyal et al., 2017b; Guu et al., 2017; Tomczak & Welling, 2018).
This work outlines semi-amortized variational autoencoders, which combine amortized inference with local iterative refinement to train deep generative models of text and images. With this approach we are able to train deep latent variable models of text with an expressive autoregressive generative model that does not ignore the latent code.
From the perspective of learning latent representations, one might question the prudence of using an autoregressive model that fully conditions on its entire history (as opposed to assuming some conditional independence), given that $p(x)$ can always be factorized as $\prod_t p(x_t \mid x_{<t})$ and therefore the model is non-identifiable (i.e. it does not have to utilize the latent variable). However, in finite data regimes we might still expect a model that makes use of its latent variable to generalize better, due to the potentially better inductive bias provided by the latent variable. Training generative models that both model the underlying data well and learn good latent representations is an important avenue for future work.
We thank Rahul Krishnan, Rachit Singh, and Justin Chiu for insightful comments/discussion. We additionally thank Zichao Yang for providing the text dataset. YK and AM are supported by Samsung Research. SW is supported by an Amazon AWS ML Award.
Amos, B. and Kolter, J. Z. OptNet: Differentiable Optimization as a Layer in Neural Networks. In Proceedings of ICML, 2017.
Cho, K., Raiko, T., Ilin, A., and Karhunen, J. A Two-Stage Pretraining Algorithm for Deep Boltzmann Machines. In Proceedings of ICANN, 2013.
Dieng, A. B., Wang, C., Gao, J., and Paisley, J. TopicRNN: A Recurrent Neural Network With Long-Range Semantic Dependency. In Proceedings of ICLR, 2017.
Krishnan, R. G., Liang, D., and Hoffman, M. On the Challenges of Learning with Inference Networks on Sparse, High-dimensional Data. In Proceedings of AISTATS, 2018.
LeCun, Y., Simard, P., and Pearlmutter, B. Automatic Learning Rate Maximization by On-line Estimation of the Hessian's Eigenvectors. In Proceedings of NIPS, 1993.
For stable training we found it crucial to modify Algorithm 1 to clip the gradients at various stages. This is shown in Algorithm 2, where we have a clipping parameter $\eta$. The function $\text{clip}(\mathbf{u}, \eta)$ is given by
$$\text{clip}(\mathbf{u}, \eta) = \begin{cases} \dfrac{\eta}{\|\mathbf{u}\|_2}\,\mathbf{u} & \text{if } \|\mathbf{u}\|_2 > \eta \\[4pt] \mathbf{u} & \text{otherwise} \end{cases}$$
We use $\eta = 5$ in all experiments. The finite difference estimation itself also uses gradient clipping. See https://github.com/harvardnlp/sa-vae/blob/master/optim_n2n.py for the exact implementation.
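As a concrete sketch, the norm-based clipping above can be written as follows (an illustrative helper, not the repository code):

```python
import numpy as np

def clip_by_norm(u, eta=5.0):
    """Rescale u to have L2 norm at most eta; leave it unchanged otherwise.
    eta = 5 matches the setting used in all experiments."""
    norm = np.linalg.norm(u)
    if norm > eta:
        return u * (eta / norm)
    return u
```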
For all the variational models we use a spherical Gaussian prior. The variational family is the diagonal Gaussian parameterized by the vector of means and log variances. For models trained with SVI the initial variational parameters are randomly initialized from a Gaussian with standard deviation equal to 0.1.
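A minimal sketch of this setup, sampling from the diagonal Gaussian family via the reparameterization trick with the SVI-style random initialization (names and the 32-dimensional latent size here follow the text-model configuration below and are otherwise illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_diag_gaussian(mu, logvar, rng):
    """Reparameterized sample z = mu + sigma * eps with eps ~ N(0, I),
    where the variational family is a diagonal Gaussian parameterized
    by vectors of means and log variances."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# SVI-style initialization: variational parameters drawn from a
# Gaussian with standard deviation 0.1.
mu0 = 0.1 * rng.standard_normal(32)
logvar0 = 0.1 * rng.standard_normal(32)
z = sample_diag_gaussian(mu0, logvar0, rng)
```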
We generate synthetic data points according to the following generative process:
Here LSTM is a one-layer LSTM with 100 hidden units where the input embedding is also 100-dimensional. The initial hidden/cell states are set to zero, and we generate for 5 time steps for each example (so $T = 5$). The MLP consists of a single affine transformation to project out to the vocabulary space, which has 1000 tokens. LSTM/MLP parameters are randomly initialized, except for the part of the MLP that directly connects to the latent variables, which is initialized with a larger scale. This is done to make sure that the latent variables have more influence in predicting $x_t$. We generate 5000 training/validation/test examples.
When we learn the generative model the LSTM parameters are randomly initialized. The inference network is also a one-layer LSTM with 100-dimensional hidden units/input embeddings, where the variational parameters are predicted via an affine transformation on the final hidden state of the encoder. All models are trained with stochastic gradient descent with batch size 50, learning rate 1.0, and gradient clipping at 5. The learning rate starts decaying by a factor of 2 each epoch after the first epoch at which validation performance does not improve. This learning rate decay is not triggered for the first 5 epochs. We train for 20 epochs, which was enough for convergence of all models. For SVI/SA-VAE we perform 20 steps of iterative inference with stochastic gradient descent and learning rate 1.0 with gradient clipping at 5.
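The learning-rate decay schedule above can be sketched as follows (a hypothetical helper; `val_history` holds per-epoch validation losses, and decay is never triggered during the first `min_epoch` epochs):

```python
def decayed_lr(val_history, lr0=1.0, decay=2.0, min_epoch=5):
    """Starting from lr0, divide the learning rate by `decay` every epoch
    once validation performance stops improving, but never trigger the
    decay within the first min_epoch epochs."""
    lr = lr0
    best = float("inf")
    triggered = False
    for epoch, loss in enumerate(val_history, start=1):
        if loss < best:
            best = loss
        elif epoch > min_epoch:
            triggered = True
        if triggered:
            lr /= decay
    return lr
```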
[Table 4: results for VAE/SA-VAE with a 3-layer ResNet vs. 2-layer MLP inference network, at training set sizes of 25%, 50%, 75%, and 100%; numeric entries not recovered.]
We use the same model architecture as was used in Yang et al. (2017). The inference network and the generative model are both one-layer LSTMs with 1024-dimensional hidden states where the input word embedding is 512-dimensional. We use the final hidden state of the encoder to predict (via an affine transformation) the vector of variational means and log variances. The latent space is 32-dimensional. The sample from the variational posterior is used to initialize the initial hidden state of the generative LSTM (but not the cell state) via an affine transformation, and additionally fed as input (i.e. concatenated with the word embedding) at each time step. There are dropout layers with probability 0.5 between the input-to-hidden layer and the hidden-to-output layer on the generative LSTM only.
The data contains 100000/10000/10000 train/validation/test examples with 20000 words in the vocabulary. All models are trained with stochastic gradient descent with batch size 32 and learning rate 1.0, where the learning rate starts decaying by a factor of 2 each epoch after the first epoch at which validation performance does not improve. This learning rate decay is not triggered for the first 15 epochs to ensure adequate training. We train for 30 epochs or until the learning rate has decayed 5 times, which was enough for convergence for all models. Model parameters are randomly initialized and gradients are clipped at 5. We employ a KL-cost annealing schedule whereby the multiplier on the KL-cost term is increased linearly from 0.1 to 1.0 each batch over 10 epochs. For models trained with iterative inference we perform SVI via stochastic gradient descent with momentum 0.5 and learning rate 1.0. Gradients are clipped after each step of SVI (also at 5).
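The KL-cost annealing schedule can be sketched as follows (an illustrative helper; `steps_per_epoch` depends on the dataset and batch size):

```python
def kl_weight(step, steps_per_epoch, warmup_epochs=10, lo=0.1, hi=1.0):
    """Multiplier on the KL-cost term: increased linearly from lo to hi,
    updated every batch, over warmup_epochs epochs, then held at hi."""
    total = warmup_epochs * steps_per_epoch
    return min(hi, lo + (hi - lo) * step / total)
```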
The preprocessed OMNIGLOT dataset does not have a standard validation split so we randomly pick 2000 images from training as validation. As in previous work, pixel values are scaled to be between 0 and 1 and interpreted as probabilities, and the images are dynamically binarized during training.
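A minimal sketch of dynamic binarization, treating the scaled intensities in [0, 1] as Bernoulli probabilities and resampling the binary image each time a batch is used (rather than thresholding once):

```python
import numpy as np

def dynamic_binarize(images, rng):
    """Sample a binary image: each pixel is 1 with probability equal to
    its (scaled) intensity. Resampled on every use during training."""
    return (rng.random(images.shape) < images).astype(np.float32)
```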
Our inference network consists of 3 residual blocks where each block is made up of a standard residual layer (i.e. two convolutional layers with $3 \times 3$ filters, ReLU nonlinearities, batch normalization, and residual connections) followed by a downsampling convolutional layer with filter size and stride equal to 2. These layers have 64 feature maps. The output of the residual network is flattened and then used to obtain the variational means/log variances via an affine transformation.
The sample from the variational distribution (which is 32-dimensional) is first projected out to the image spatial resolution with 4 feature maps via a linear transformation, then concatenated with the original image, and finally fed as input to a 12-layer Gated PixelCNN (van den Oord et al., 2016). The PixelCNN has three $9 \times 9$ layers, followed by three $7 \times 7$ layers, then three $5 \times 5$ layers, and finally three $3 \times 3$ layers. All the layers have 32 feature maps, and there is a final $1 \times 1$ convolutional layer followed by a sigmoid nonlinearity to produce a distribution over binary output. The layers are appropriately masked to ensure that the distribution over each pixel is conditioned only on the pixels left/top of it. We train with Adam with learning rate 0.001, $\beta_1 = 0.9$, $\beta_2 = 0.999$ for 100 epochs with batch size of 50. Gradients are clipped at 5.
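The masking can be sketched as follows; `causal_mask` is a hypothetical helper that builds the standard PixelCNN-style filter mask (in the usual convention, type "A" for the first layer also masks the center pixel, while type "B" for subsequent layers keeps it):

```python
import numpy as np

def causal_mask(k, mask_type="A"):
    """Build a k x k mask that zeroes weights on pixels to the right of /
    below the center, so each output pixel depends only on pixels
    left/top of it. Type "A" additionally masks the center pixel."""
    m = np.ones((k, k), dtype=np.float32)
    c = k // 2
    # center row: zero out the center (type A) or everything right of it (type B)
    m[c, c + (1 if mask_type == "B" else 0):] = 0.0
    # all rows below the center
    m[c + 1:, :] = 0.0
    return m
```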
For models trained with iterative inference we perform SVI via stochastic gradient descent with momentum 0.5 and learning rate 1.0, with gradient clipping (also at 5).
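The local refinement loop can be sketched as follows, where `grad_fn` is a hypothetical function returning the ELBO gradient at the current variational parameters (initialized, in SA-VAE, from the inference network's output):

```python
import numpy as np

def svi_refine(lam, grad_fn, steps=20, lr=1.0, momentum=0.5, eta=5.0):
    """Momentum SGD ascent on the ELBO starting from variational
    parameters lam, with the gradient clipped to norm eta each step."""
    v = np.zeros_like(lam)
    for _ in range(steps):
        g = grad_fn(lam)
        norm = np.linalg.norm(g)
        if norm > eta:
            g = g * (eta / norm)      # gradient clipping after each step
        v = momentum * v + g
        lam = lam + lr * v            # ascent on the ELBO
    return lam
```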
In Table 4 we investigate the performance of VAE/SA-VAE as we vary the capacity of the inference network, size of the training set, and the capacity of the generative model. The MLP inference network has two ReLU layers with 128 hidden units. For varying the PixelCNN generative model, we sequentially remove layers from our baseline 12-layer model starting from the bottom (so the 9-layer PixelCNN has three $7 \times 7$ layers, three $5 \times 5$ layers, and three $3 \times 3$ layers, all with 32 feature maps).
Intuitively, we expect iterative inference to help more when the inference network and the generative model are less powerful, and we indeed see this in Table 4. Further, one might expect SA-VAE to be more helpful in small-data regimes, as it is harder for the inference network to amortize inference and generalize well to unseen data. However, we find that SA-VAE outperforms VAE by a similar margin across all training set sizes.
Finally, we observe that across all scenarios the KL portion of the loss is much higher for models trained with SA-VAE, indicating that the learned generative models make more use of the latent representations.
In Figure 5 we show the input saliency by part-of-speech tag (left), position (center), and frequency (right). The input saliency of a token $x_t$ is defined as
$$\text{saliency}(x_t) = \Big\| \frac{d\,\|\boldsymbol{\mu}\|_2}{d\,\mathbf{w}_t} \Big\|_2,$$
where $\mathbf{w}_t$ is the encoder word embedding for $x_t$ and $\boldsymbol{\mu}$ is the variational mean. Part-of-speech tagging is done using NLTK.
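As an illustration, the saliency computation can be sketched with a toy linear encoder standing in for the LSTM (all names, the weight matrix, and the per-position weights are hypothetical), assuming saliency is the norm of the gradient of the variational mean's magnitude with respect to a token's word embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoder": mu = W @ sum_t(a_t * w_t), with position weights a_t.
W = rng.standard_normal((4, 3))           # hypothetical encoder weights
embeds = rng.standard_normal((5, 3))      # word embeddings w_1..w_5
a = np.array([1.0, 0.5, 2.0, 0.3, 1.5])   # hypothetical per-position weights

def input_saliency(t):
    """Norm of the gradient of ||mu||_2 w.r.t. the embedding w_t.
    For this linear encoder the gradient is a_t * W^T (mu / ||mu||)."""
    mu = W @ (a[:, None] * embeds).sum(axis=0)
    g = a[t] * W.T @ (mu / np.linalg.norm(mu))
    return np.linalg.norm(g)
```

In the paper this gradient is taken through the actual LSTM encoder via backpropagation; the linear encoder above only keeps the computation checkable by hand.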