1 Introduction
Variational autoencoders (VAEs) (Kingma and Welling, 2013; Rezende et al., 2014) have been applied to many NLP tasks, including language modeling (Bowman et al., 2015; Miao et al., 2016), dialog response generation (Zhao et al., 2017; Wen et al., 2017), semi-supervised text classification (Xu et al., 2017), controllable text generation (Hu et al., 2017), and text compression (Miao and Blunsom, 2016). A prominent component of a VAE is the distribution-based latent representation for text sequence observations. This flexible representation allows the VAE to explicitly model holistic properties of sentences, such as style, topic, and high-level linguistic and semantic features. Samples from the prior latent distribution can produce diverse and well-formed sentences through simple deterministic decoding (Bowman et al., 2015).
Due to the sequential nature of text, an autoregressive decoder is typically employed in the VAE. This is often implemented with a recurrent neural network (RNN); the long short-term memory (LSTM) RNN (Hochreiter and Schmidhuber, 1997) is used widely. This introduces a notorious issue when a VAE is trained using traditional methods: the decoder ignores the latent variable, yielding what is termed the KL-vanishing problem.
Several attempts have been made to ameliorate this issue (Yang et al., 2017; Dieng et al., 2018; Zhao et al., 2017; Kim et al., 2018). Among them, perhaps the simplest solution is monotonic KL annealing, where the weight of the KL penalty term is scheduled to gradually increase during training (Bowman et al., 2015). While these techniques can effectively alleviate the KL-vanishing issue, a proper unified theoretical interpretation is still lacking, even for the simple annealing scheme.
In this paper, we analyze the variable dependency in a VAE, and point out that the autoregressive decoder has two paths (formally defined in Section 3.1) that work together to generate text sequences. One path is conditioned on the latent codes, and the other is conditioned on previously generated words. KL vanishing happens because the first path can easily get blocked, due to the lack of good latent codes at the beginning of decoder training; the easiest solution an expressive decoder can learn is to ignore the latent code and rely only on the other path for decoding. To remedy this issue, a promising approach is to remove the blockage in the first path, and feed meaningful latent codes to the decoder during training, so that the decoder can readily use them to generate controllable observations (Bowman et al., 2015).
This paper makes the following contributions: (i) We provide a novel explanation for the KL-vanishing issue, and develop an understanding of the strengths and weaknesses of existing scheduling methods (e.g., constant or monotonic annealing schedules). (ii) Based on our explanation, we propose a cyclical annealing schedule. It repeats the annealing process multiple times, and can be considered an inexpensive approach to leveraging the good latent codes learned in the previous cycle, as a warm restart, to train the decoder in the next cycle. (iii) We demonstrate that the proposed cyclical annealing schedule improves VAE training on a broad range of tasks (with negligible extra computational cost), including text modeling, dialog response generation, and unsupervised language pretraining.
2 Preliminaries
2.1 The VAE model
To generate a text sequence of length $T$, $x = [x_1, \dots, x_T]$, neural language models (Mikolov et al., 2010) generate every token $x_t$ conditioned on the previously generated tokens:

$$p(x) = \prod_{t=1}^{T} p(x_t \mid x_{<t}),$$

where $x_{<t}$ indicates all tokens before $x_t$.
The VAE model for text consists of two parts, generation and inference (Kingma and Welling, 2013; Rezende et al., 2014; Bowman et al., 2015). The generative model (decoder) draws a continuous latent vector $z$ from the prior $p(z)$, and generates the text sequence $x$ from the conditional distribution $p_\theta(x \mid z)$; $p(z)$ is typically assumed to be a multivariate Gaussian, and $\theta$ represents the neural network parameters. The following autoregressive decoding process is usually used:

$$p_\theta(x \mid z) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}, z). \quad (1)$$
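To make this factorization concrete, here is a minimal, self-contained Python sketch of scoring a sequence under an autoregressive conditional $p_\theta(x_t \mid x_{<t}, z)$. The vocabulary, the logit table, and the scalar latent code `z` are hypothetical stand-ins for a trained LSTM decoder, not the paper's model:

```python
import math

# Toy vocabulary and a hypothetical conditional p(x_t | x_{<t}, z):
# a fixed logit table keyed on the previous token, nudged by a scalar
# latent code z. A real decoder would be an LSTM conditioned on z.
VOCAB = ["<bos>", "a", "b", "<eos>"]

def cond_probs(prev_token, z):
    """Return a toy distribution over VOCAB given the previous token and z."""
    logits = {"<bos>": [0.0, 1.0, 1.0, 0.0],
              "a":     [0.0, 0.5, 1.5, 1.0],
              "b":     [0.0, 1.5, 0.5, 1.0],
              "<eos>": [0.0, 0.0, 0.0, 1.0]}[prev_token]
    # z shifts the logit of token "a" (index 1), mimicking a latent code
    # that steers generation towards a particular style of sequence.
    scores = [math.exp(l + (z if i == 1 else 0.0)) for i, l in enumerate(logits)]
    total = sum(scores)
    return [s / total for s in scores]

def sequence_log_prob(tokens, z):
    """log p(x|z) = sum_t log p(x_t | x_{<t}, z), as in Eq. (1)."""
    logp, prev = 0.0, "<bos>"
    for tok in tokens:
        probs = cond_probs(prev, z)
        logp += math.log(probs[VOCAB.index(tok)])
        prev = tok
    return logp
```

The two generation paths discussed later are both visible here: the token history (`prev`) and the latent code (`z`) jointly determine each conditional.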
Parameters $\theta$ are typically learned by maximizing the marginal log-likelihood $\log p_\theta(x) = \log \int p(z)\, p_\theta(x \mid z)\, dz$. However, this marginal term is intractable to compute for many decoder choices. Thus, variational inference is considered, and the true posterior $p_\theta(z \mid x)$ is approximated via the variational distribution $q_\phi(z \mid x)$ (often known as the inference model or encoder), implemented via a parameterized neural network. This yields the evidence lower bound (ELBO) as an objective:

$$\mathcal{L}_{\text{ELBO}} = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big). \quad (2)$$
Typically, $q_\phi(z \mid x)$ is modeled as a Gaussian distribution, and the reparametrization trick is used for efficient learning (Kingma and Welling, 2013).

2.2 Training Schedules and KL Vanishing
There is an alternative interpretation of the ELBO: the VAE objective can be viewed as a regularized version of the autoencoder (AE) (Goodfellow et al., 2016). It is thus natural to extend the negative of $\mathcal{L}_{\text{ELBO}}$ in (2) by introducing a hyperparameter $\beta$ to control the strength of regularization:

$$\mathcal{L}_\beta = \mathcal{L}_E + \beta \mathcal{L}_R, \quad (3)$$
$$\text{with} \quad \mathcal{L}_E = -\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big], \quad (4)$$
$$\mathcal{L}_R = \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big), \quad (5)$$

where $\mathcal{L}_E$ is the reconstruction error (or negative log-likelihood, NLL), and $\mathcal{L}_R$ is a KL regularizer.
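As an illustration, the $\beta$-weighted cost $\mathcal{L}_E + \beta \mathcal{L}_R$ can be sketched for the common diagonal-Gaussian posterior, where $\mathcal{L}_R$ has the closed form $\tfrac{1}{2}\sum_i (\mu_i^2 + \sigma_i^2 - 1 - \log \sigma_i^2)$. The reconstruction NLL below is a placeholder value rather than the output of a real decoder, and the function names are ours:

```python
import math

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    summed over latent dimensions."""
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def beta_objective(rec_nll, mu, logvar, beta):
    """L_beta = L_E + beta * L_R, cf. Eqs. (3)-(5):
    rec_nll plays the role of L_E, the Gaussian KL that of L_R."""
    return rec_nll + beta * gaussian_kl(mu, logvar)
```

Setting `beta=0` recovers the pure reconstruction objective, and `beta=1` the standard negative ELBO.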
The cost function $\mathcal{L}_\beta$ provides a unified perspective for understanding various autoencoder variants and training methods. When $\beta = 1$, we recover the VAE in (2). When $\beta = 0$ and $q_\phi(z \mid x)$ is a delta distribution, we recover the AE. In other words, the AE does not regularize the variational distribution toward a prior distribution, and only a point estimate represents the text sequence's latent feature. In practice, it has been found that learning with an AE is prone to overfitting (Bowman et al., 2015), or to generating plain dialog responses (Zhao et al., 2017). Hence, it is desirable to retain meaningful posteriors in real applications. Two different schedules for $\beta$ have been commonly used for a text VAE.

Constant Schedule
The standard approach is to keep $\beta = 1$ fixed during the entire training procedure, as this corresponds to optimizing the true VAE objective. Unfortunately, instability has been observed in text analysis, in that the KL term becomes vanishingly small during training (Bowman et al., 2015). This issue causes two undesirable outcomes: an encoder that produces posteriors almost identical to the Gaussian prior for all observations (rather than a more interesting posterior), and a decoder that completely ignores the latent variable $z$, so that the learned model reduces to a simpler language model. This is known as the KL-vanishing issue in text VAEs.
Monotonic Annealing Schedule.
A simple remedy was proposed in Bowman et al. (2015) to alleviate KL collapse. It sets $\beta = 0$ at the beginning of training, and gradually increases $\beta$ until $\beta = 1$ is reached. In this setting, we do not optimize the proper lower bound in (2) during the early stages of training, but improvements in the value of that bound are nonetheless observed at convergence in previous work (Bowman et al., 2015; Zhao et al., 2017).
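A minimal sketch of this monotonic schedule, assuming a linear ramp (sigmoid-shaped ramps are also common; the `warmup_steps` name is ours):

```python
def monotonic_beta(t, warmup_steps):
    """Monotonic (linear) annealing: beta grows from 0 to 1 over the
    first `warmup_steps` iterations, then stays at 1 for the rest of
    training (Bowman et al., 2015)."""
    return min(1.0, t / float(warmup_steps))
```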
The monotonic annealing schedule has become the de facto standard for training text VAEs, and has been widely adopted in many NLP tasks. Though simple and often effective, this heuristic still lacks a proper justification. Further, how to best schedule $\beta$ is largely unexplored.

[Figure 1: (a) Traditional VAE; (b) VAE with an autoregressive decoder]
3 Cyclical Annealing Schedule
3.1 Identifying Sources of KL Vanishing
In the traditional VAE (Kingma and Welling, 2013), $z$ generates $x$ directly, and the reconstruction depends on only one path, passing through $z$, as shown in Figure 1(a). Hence, $z$ can largely determine the reconstructed $x$. In contrast, when an autoregressive decoder is used in a text VAE (Bowman et al., 2015), there are two paths from $x$ to its reconstruction, as shown in Figure 1(b). Path A is the same as that in the standard VAE, where $z$ is the global representation that controls the generation of $x$; Path B leaks partial ground-truth information about $x$ at every time step of the sequential decoding, generating $x_t$ conditioned on $x_{<t}$. Therefore, Path B can potentially bypass Path A to generate $x$, leading to KL vanishing.
From this perspective, we hypothesize that the model-collapse problem is related to the low quality of $z$ in the beginning phase of decoder training. A lower-quality $z$ makes reconstructing $x$ via Path A more difficult. As a result, the model is forced to learn an easier solution for decoding: generating $x$ via Path B only.
We argue that this phenomenon is easily observed because of the powerful representation capability of the autoregressive decoder. It has been shown empirically that autoregressive decoders are able to capture highly complex distributions, such as natural language sentences (Mikolov et al., 2010). This means that Path B alone has enough capacity to model $x$, even though the decoder takes $z$ as input to produce $x$. Zhang et al. (2017) showed that flexible deep neural networks can easily fit randomly labeled training data, and here the decoder can learn to rely solely on $x_{<t}$ for generation when $z$ is of low quality.
We use our hypothesis to explain the learning behavior of the different scheduling schemes for $\beta$ as follows.
Constant Schedule
The two loss terms in (2) are weighted equally in the constant schedule. At the early stage of optimization, $\theta$ and $\phi$ are randomly initialized and the latent codes $z$ are of low quality. The KL term pushes $q_\phi(z \mid x)$ close to the uninformative prior $p(z)$: the posteriors become more like isotropic Gaussian noise, and less representative of their corresponding observations. In other words, $\mathcal{L}_R$ blocks Path A, and thus $z$ remains uninformative during the entire training process: it starts from random initialization and is then regularized towards random noise. Although the reconstruction term can be satisfied via two paths, since $z$ is noisy, the decoder learns to discard Path A (i.e., it ignores $z$), and chooses Path B to generate the sentence word by word.
Monotonic Annealing Schedule
The monotonic schedule sets $\beta$ close to 0 in the early stage of training, which effectively removes the blockage on Path A, and the model reduces to a denoising autoencoder (the Gaussian sampling remains for $q_\phi(z \mid x)$). $\mathcal{L}_E$ becomes the only objective, which can be reached via both paths. Though randomly initialized, $z$ gradually learns to capture useful information for reconstructing $x$ during training. By the time the full VAE objective is considered ($\beta = 1$), the $z$ learned earlier can be viewed as the VAE initialization; such latent variables are much more informative than random ones, and are thus ready for the decoder to use.

To mitigate the KL-vanishing issue, it is key to have meaningful latent codes $z$ at the beginning of decoder training, so that $z$ can be utilized. However, the monotonic schedule under-weights the prior regularization, and the learned $q_\phi(z \mid x)$ tends to collapse into a point estimate (i.e., the VAE reduces to an AE). This under-regularized estimate can result in sub-optimal decoder learning. A natural question is how one can obtain a better distribution estimate for $z$ as initialization, while retaining low computational cost.
3.2 Cyclical Annealing Schedule
Our proposal is to use $z \sim q_\phi(z \mid x)$, trained under the full VAE objective, as the initialization. To progressively improve the latent representation $z$, we propose a cyclical annealing schedule. We start with $\beta = 0$, increase $\beta$ at a fast pace, and then keep $\beta = 1$ for subsequent learning iterations. This encourages the model to converge towards the VAE objective, and to infer its first raw full latent distribution.

Unfortunately, Path A is blocked at $\beta = 1$. The optimization is then continued at $\beta = 0$ again, which perturbs the VAE objective, dislodges it from its convergence point, and reopens Path A. Importantly, the decoder is now trained with latent codes drawn from a full distribution $q_\phi(z \mid x)$, and both paths are considered. We repeat this process several times to achieve better convergence.
Formally, $\beta_t$ has the form:

$$\beta_t = \begin{cases} f(\tau), & \tau \le R \\ 1, & \tau > R \end{cases} \quad (6)$$
$$\text{with} \quad \tau = \frac{\mathrm{mod}(t-1, \lceil T/M \rceil)}{T/M}, \quad (7)$$

where $t$ is the iteration number, $T$ is the total number of training iterations, $f$ is a monotonically increasing function, and we introduce two new hyperparameters associated with the cyclical annealing schedule:

- $M$: the number of cycles (default $M = 4$);
- $R$: the proportion of a cycle used to increase $\beta$ (default $R = 0.5$).
In other words, we split the training process into $M$ cycles, each starting with $\beta = 0$ and ending with $\beta = 1$. We provide an example of a cyclical schedule in Figure 2(b), compared with the monotonic schedule in Figure 2(a). Within one cycle, there are two consecutive stages (divided by $R$):

- Annealing. $\beta$ is annealed from 0 to 1 in the first $R \lceil T/M \rceil$ training steps of a cycle (e.g., the rising segments in Figure 2(b)). $\beta = 0$ forces the model to learn a representative $z$ to reconstruct $x$. As depicted in Figure 1(b), there is no interruption from the prior on Path A, so $z$ is forced to learn the global representation of $x$. By gradually increasing $\beta$ towards 1, $q_\phi(z \mid x)$ is regularized to transition from a point estimate to a distribution estimate, spreading out to match the prior.

- Fixing. As our ultimate goal is to learn a VAE model, we fix $\beta = 1$ for the rest of the training steps within each cycle (e.g., the flat segments in Figure 2(b)). This drives the model to optimize the full VAE objective until convergence.
As illustrated in Figure 2, the monotonic schedule anneals $\beta$ from 0 to 1 once and then fixes $\beta = 1$ for the rest of training; the cyclical schedule alternately repeats the annealing and fixing stages multiple times.
A Practical Recipe
The existing schedules can be viewed as special cases of the proposed cyclical schedule. The cyclical schedule reduces to the constant schedule when $R = 0$, and it reduces to a monotonic schedule when $M = 1$ and $R$ is relatively small (in practice, the monotonic schedule usually anneals at a very fast pace, so $R$ is small compared with the entire training procedure). In theory, any monotonically increasing function $f$ can be adopted for the cyclical schedule, as long as it starts at 0 and reaches 1 by the end of the annealing stage. In practice, we suggest building the cyclical schedule upon the success of monotonic schedules: we adopt the same $f$, and modify it by setting $M = 4$ and $R = 0.5$ (as defaults). Three widely used increasing functions for $f$ are linear (Fraccaro et al., 2016; Goyal et al., 2017), sigmoid (Bowman et al., 2015), and cosine (Lai et al., 2018). We present comparative results using the linear function in Figure 2, and show the complete comparison for the other functions in Figure 7 of the Supplementary Material (SM).
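The schedule in Eqs. (6)-(7) and the recipe above can be sketched as follows. The exact indexing convention (here `t % ceil(T/M)`) and the sigmoid sharpness constant are illustrative assumptions, not values fixed by the paper:

```python
import math

def cyclical_beta(t, T, M=4, R=0.5, f="linear"):
    """Cyclical schedule (cf. Eqs. (6)-(7)): split T iterations into M
    cycles; within each cycle, anneal beta from 0 to 1 over the first
    fraction R, then fix beta = 1. `f` selects the increasing function."""
    cycle_len = math.ceil(T / M)
    tau = (t % cycle_len) / cycle_len     # position within the cycle, in [0, 1)
    if tau > R:
        return 1.0                        # fixing stage
    s = tau / R                           # rescale the annealing phase to [0, 1]
    if f == "linear":
        return s
    if f == "sigmoid":                    # sharpness 10 is an arbitrary choice
        return 1.0 / (1.0 + math.exp(-10.0 * (s - 0.5)))
    if f == "cosine":
        return 0.5 * (1.0 - math.cos(math.pi * s))
    raise ValueError(f)
```

With the defaults `M=4, R=0.5`, each quarter of training ramps beta up over its first half and holds beta at 1 for its second half.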
3.3 On the impact of $\beta$
This section derives a bound on the training objective to rigorously study the impact of $\beta$; proof details are included in the SM. For notational convenience, we identify each data sample with a unique integer index $n$, drawn from a uniform random variable on $\{1, 2, \dots, N\}$. Further, we define $q(z \mid n) = q_\phi(z \mid x_n)$ and $q(z, n) = q(z \mid n)\, q(n) = q(z \mid n)/N$. Following Makhzani et al. (2016), we refer to $q(z) = \sum_{n=1}^{N} q(z \mid n)\, q(n)$ as the aggregated posterior. This marginal distribution captures the aggregated $z$ over the entire dataset. The KL term in (5), averaged over the dataset, can be decomposed into two refined terms (Chen et al., 2018; Hoffman and Johnson, 2016):

$$\mathcal{L}_R = \mathbb{E}_{q(n)}\big[\mathrm{KL}\big(q(z \mid n) \,\|\, p(z)\big)\big] = I_q(z; n) + \mathrm{KL}\big(q(z) \,\|\, p(z)\big), \quad (8)$$

where $I_q(z; n)$ is the mutual information (MI) measured by $q(z, n)$. Higher MI leads to a higher correlation between the latent and data variables, and encourages a reduction in the degree of KL vanishing. The marginal KL is represented by $\mathrm{KL}(q(z) \,\|\, p(z))$, and it measures the fitness of the aggregated posterior to the prior distribution.
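The decomposition in (8) is an exact identity, which can be verified numerically in a small discrete toy setting (the posterior values below are arbitrary illustrative numbers):

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A hypothetical toy: 3 data points (uniform q(n)), 2 latent states.
q_z_given_n = [[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]]
p_z = [0.5, 0.5]
N = len(q_z_given_n)

# Aggregated posterior q(z) = E_n[ q(z|n) ]
q_z = [sum(row[k] for row in q_z_given_n) / N for k in range(2)]

# Left-hand side: E_n[ KL(q(z|n) || p(z)) ]
lhs = sum(kl(row, p_z) for row in q_z_given_n) / N

# Right-hand side: I_q(z; n) + KL(q(z) || p(z)),
# using I(z; n) = E_n[ KL(q(z|n) || q(z)) ]
mi = sum(kl(row, q_z) for row in q_z_given_n) / N
rhs = mi + kl(q_z, p_z)

assert abs(lhs - rhs) < 1e-12  # the identity in Eq. (8) holds exactly
```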
The reconstruction term in (5) provides a lower bound for the MI measured by $q(z, n)$, based on Corollary 3 in Li et al. (2017):

$$I_q(z; n) \ge -\mathcal{L}_E + C, \quad (9)$$

where $C$ is a constant.
Analysis of $\beta$

When scheduled with $\beta$, the training objective over the dataset can be written as:

$$\mathcal{F}_\beta = \mathcal{L}_E + \beta \mathcal{L}_R \quad (10)$$
$$= \mathcal{L}_E + \beta\, I_q(z; n) + \beta\, \mathrm{KL}\big(q(z) \,\|\, p(z)\big). \quad (11)$$

To reduce KL vanishing, we desire an increase in the MI term $I_q(z; n)$, which appears in both $\mathcal{L}_E$ (through the bound in (9)) and $\mathcal{L}_R$ (through the decomposition in (8)), modulated by $\beta$. This shows that the degree to which KL vanishing is reduced is inversely related to $\beta$. When $\beta = 0$, the model focuses fully on maximizing the MI. As $\beta$ increases, the model gradually transitions towards fitting the aggregated latent codes to the given prior. When $\beta = 1$, the maximization of MI becomes implicit in $\mathcal{F}_\beta$; it is determined by the amortized inference regularization (implied by the encoder's expressivity) (Shu et al., 2018), which further affects the performance of the generative density estimator.
[Figure 3: (a) ELBO; (b) Reconstruction Error; (c) KL term]
4 Visualization of Latent Space
We compare the different schedules by visualizing the learning processes on an illustrative problem. Consider a dataset consisting of 10 sequences, each a 10-dimensional one-hot vector with the value 1 appearing in a different position. A 2-dimensional latent space is used for ease of visualization. Both the encoder and decoder are implemented as 2-layer LSTMs with 64 hidden units each. We train for a fixed total number of iterations, using the scheduling schemes in Figure 2.
The learning curves for the ELBO, reconstruction error, and KL term are shown in Figure 3. The three schedules achieve very similar ELBO values. However, the cyclical schedule provides substantially lower reconstruction error and higher KL divergence. Interestingly, the cyclical schedule improves performance progressively: each cycle is better than the previous one, and there are clear periodic patterns across cycles. This suggests that the cyclical schedule allows the model to use previously learned results as a warm restart to achieve further improvement.
We visualize the resulting division of the latent space at different training steps in Figure 4, where each color corresponds to the posterior $q_\phi(z \mid x_n)$ of one sequence $x_n$, for $n = 1, \dots, 10$. We observe that the constant schedule produces heavily mixed latent codes for the different sequences throughout the entire training process. The monotonic schedule starts with mixed $z$, but soon divides the space into a mixture of 10 cluttered Gaussians during the annealing process (and the division remains cluttered for the rest of training). The cyclical schedule behaves similarly to the monotonic schedule in the first 10K steps (the first cycle). But starting from the 2nd cycle, much more clearly divided clusters appear as learning builds on the results of the 1st cycle. However, annealing with $\beta < 1$ leaves some holes between the clusters, so that $q(z)$ violates the constraint imposed by the prior $p(z)$. This is alleviated at the end of the 2nd cycle, as the model is trained with $\beta = 1$. As the process repeats, we see clearer patterns in the 4th cycle than in the 2nd cycle at both stages. This shows that more structured information is captured in $z$ using the cyclical schedule, which is beneficial in downstream applications, as shown in the experiments.
5 Related Work
Solutions to KL vanishing
Several techniques have been proposed to mitigate the KL-vanishing issue. The proposed method is most closely related to the monotonic KL annealing technique in Bowman et al. (2015). In addition to introducing a specific algorithm, we have comprehensively studied the impact of $\beta$ and its scheduling schemes. Our explanations can be used to interpret other techniques, which can be broadly grouped into two classes.
The first category attempts to weaken Path B, forcing the decoder to use Path A. Word-drop decoding (Bowman et al., 2015) sets a certain percentage of the target words to zero; it has been shown that this may degrade performance when the drop rate is too high. The dilated CNN was considered in Yang et al. (2017) as a new type of decoder to replace the LSTM. By changing the decoder's dilation architecture, one can control Path B: the effective context from $x_{<t}$.
The second category of techniques improves the dependency in Path A, so that the decoder uses the latent codes more easily. Skip connections were developed in Dieng et al. (2018) to shorten the paths from $z$ to $x$ in the decoder. Zhao et al. (2017) introduced an auxiliary loss that requires the decoder to predict the bag-of-words in the dialog response; the decoder is thus forced to capture global information about the target response. Semi-amortized training (Kim et al., 2018) was proposed to perform stochastic variational inference (SVI) (Hoffman et al., 2013) on top of the amortized inference in the VAE. It shares a similar motivation with the proposed approach, in that better latent codes can reduce KL vanishing. However, the computational cost of running SVI is high, while our cyclical schedule requires no additional compute overhead. KL scheduling methods are complementary to these techniques; as shown in the experiments, the proposed cyclical schedule can further improve them.
β-VAE

The VAE has been extended to regularized versions in a growing body of work (Higgins et al., 2017; Alemi et al., 2018). Perhaps the seminal work is β-VAE (Higgins et al., 2017), which was extended in Kim and Mnih (2018) and Chen et al. (2018) to place weights on the refined terms in the KL decomposition. Their primary goal is to learn disentangled latent representations to explain the data, by setting $\beta > 1$. From an information-theoretic point of view, Alemi et al. (2018) suggest a simple method of setting $\beta < 1$ to ensure that latent-variable models with powerful stochastic decoders do not ignore their latent code. However, $\beta \ne 1$ results in an improper statistical model. Further, $\beta$ is static in their work; we consider a dynamically scheduled $\beta$ and find it more effective.
Cyclical schedules
Warm-restart techniques are common in optimization for dealing with multimodal functions. Cyclical schedules have been used to train deep neural networks (Smith, 2017), warm-restart stochastic gradient descent (Loshchilov and Hutter, 2017), improve convergence rates (Smith and Topin, 2017), obtain model ensembles (Huang et al., 2017), and explore multimodal distributions in MCMC sampling (Zhang et al., 2019). All of these works applied cyclical schedules to the learning rate. In contrast, this paper is the first to consider a cyclical schedule for $\beta$ in VAE training. Though the techniques look simple and similar, our motivation is different: we use the cyclical schedule to reopen Path A in Figure 1(b) and provide the opportunity to train the decoder with high-quality $z$.

[Figure 5: (a) ELBO; (b) Reconstruction error; (c) KL]
6 Experiments
The source code to reproduce the experimental results will be made publicly available on GitHub (https://github.com/haofuml/cyclical_annealing). For a fair comparison, we follow the practical recipe described in Section 3.2, where the monotonic schedule is treated as a special case of the cyclical schedule (while keeping all other settings the same). The default hyperparameters of the cyclical schedule are used in all cases unless stated otherwise. We study the impact of the hyperparameters in the SM, and show that a larger $M$ can provide higher performance for various values of $R$. We show the major results in this section, and put further details in the SM. The monotonic and cyclical schedules are denoted as M and C, respectively.
6.1 Language Modeling
We first consider language modeling on the Penn Tree Bank (PTB) dataset (Marcus et al., 1993). Language modeling with VAEs has been a challenging problem, and few approaches have been shown to produce rich generative models that do not collapse to standard language models. Ideally, a deep generative model trained with variational inference would pursue a higher ELBO, making use of the latent space (i.e., maintaining a non-zero KL term) while accurately modeling the underlying distribution (i.e., achieving low reconstruction errors). We implemented the different schedules based on the code (https://github.com/harvardnlp/savae) published by Kim et al. (2018).
The latent variable is 32-dimensional, and 40 epochs are used. We compare the proposed cyclical annealing schedule with the monotonic schedule baseline that, following Bowman et al. (2015), anneals $\beta$ linearly from 0 to 1.0 over 10 epochs. We also compare with semi-amortized (SA) training (Kim et al., 2018), which is considered the state-of-the-art technique for preventing KL vanishing. We set the number of SVI steps to 10.

Results are shown in Table 1. The perplexity is reported in column PPL. The cyclical schedule outperforms the monotonic schedule for both standard VAE and SA-VAE training. SA-VAE training can effectively reduce KL vanishing, but it takes 472s per epoch, which is significantly more expensive than standard VAE training at 30s per epoch. The proposed cyclical schedule adds almost zero cost.
We show the learning curves for VAE and SA-VAE in Figure 5. Interestingly, the cyclical schedule exhibits periodic learning behavior, and its performance improves progressively after each cycle. While the ELBO and PPL are similar, the cyclical schedule improves the reconstruction ability and KL values for both VAE and SA-VAE. We observe clear overfitting for SA-VAE with the monotonic schedule, while this issue is less severe for SA-VAE with the cyclical schedule.
Finally, we investigate whether our improvements come simply from having a lower $\beta$, rather than from the cyclical schedule reopening Path A for better learning. To test this, we use a monotonic schedule with a maximum $\beta$ below 1. We observe that the reconstruction and KL terms perform better individually, but the ELBO is substantially worse, because $\beta < 1$ yields an improper model. Even so, the cyclical schedule improves its performance.
Model  CVAE  CVAE+BoW  
Schedule  M  C  M  C 
RecP  36.16  29.77  18.44  16.74 
KL Loss  0.265  4.104  14.06  15.55 
B4 prec  0.185  0.234  0.211  0.219 
B4 recall  0.122  0.220  0.210  0.219 
Abow prec  0.957  0.961  0.958  0.961 
Abow recall  0.911  0.941  0.938  0.940 
Ebow prec  0.867  0.833  0.830  0.828 
Ebow recall  0.784  0.808  0.808  0.805 
6.2 Conditional VAE for Dialog
We use a cyclical schedule to improve the latent codes in Zhao et al. (2017), which are key to diverse dialog-response generation. Following Zhao et al. (2017), the Switchboard (SW) Corpus (Godfrey and Holliman, 1997) is used, which contains 2,400 two-sided telephone conversations.

Two latent variable models are considered. The first is the conditional VAE (CVAE), which has been shown to outperform the encoder-decoder neural dialog model (Serban et al., 2016). The second augments the CVAE with a bag-of-words (BoW) loss to tackle the KL-vanishing problem, as proposed in Zhao et al. (2017).

Table 2 shows sample outputs generated under the two schedules using the CVAE. One caller begins with an open-ended statement on choosing a college, and the model learns to generate responses from the other caller. The cyclical schedule generates highly diverse answers that cover multiple plausible dialog acts. In contrast, the responses from the monotonic schedule are limited to repetitions of plain responses, e.g., "i'm not sure".
Quantitative results are shown in Table 3, using the evaluation metrics from Zhao et al. (2017):

- Smoothed sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty. We use BLEU-1 to BLEU-4 as our lexical similarity metrics and normalize the scores to a 0-to-1 scale.

- Cosine distance of bag-of-word embeddings (Liu et al., 2016): a simple method to obtain sentence embeddings is to take the average or extreme of all the word embeddings in the sentence. We use GloVe embeddings and denote the average method as A-bow and the extreme method as E-bow. The scores are normalized to [0, 1]. Higher values indicate more plausible responses.

The BoW loss indeed reduces the KL-vanishing issue, as indicated by the increased KL and decreased reconstruction perplexity. When applying the proposed cyclical schedule to the CVAE, we also see a reduced KL-vanishing issue. Interestingly, it also yields the highest BLEU scores. This suggests that the cyclical schedule can generate dialog responses of higher fidelity at lower cost, as the auxiliary BoW loss is not necessary. Further, the BoW model can itself be improved when integrated with the cyclical schedule, as shown in the last column of Table 3.
6.3 Unsupervised Language Pretraining
We consider the Yelp dataset, as pre-processed in Shen et al. (2017), for unsupervised language pretraining (ULPT). Text features are extracted as the latent codes $z$ of VAE models, pre-trained with the monotonic and cyclical schedules. The AE is used as the baseline. A good VAE can learn to cluster data into meaningful groups (Kingma and Welling, 2013), indicating that well-structured $z$ are highly informative features, which usually lead to higher classification performance. To clearly compare the quality of $z$, we build a simple one-layer classifier on $z$, and fine-tune the model on different proportions of labeled data.

The results are shown in Figure 6. The cyclical schedule consistently yields the highest accuracy relative to the other methods. We visualize the t-SNE embeddings (Maaten and Hinton, 2008) of $z$ in Figure 9 of the SM, and observe that the cyclical schedule exhibits clearer clustered patterns.
Schedule (β + lr)  Rec  KL  ELBO
Cyc + Const  101.30  1.457  102.76
Mon + Const  101.93  0.858  102.78
Cyc + Cyc  100.61  1.897  102.51
Mon + Cyc  101.74  0.748  102.49
6.4 Ablation Study
To enhance performance, we also apply a cyclical schedule to the learning rate on real tasks. This ensures that the optimizer follows an optimization trajectory of the same length in each cycle (so that each cycle can fully converge). To investigate the impact of the cyclical schedule on $\beta$ versus on the learning rate, we perform two ablation experiments: (i) we make only $\beta$ cyclical and keep the learning rate constant; (ii) we make only the learning rate cyclical and keep $\beta$ monotonic. The last-epoch numbers are shown in Table 4, and the learning curves are shown in Figure 10 of the SM. Compared with the baseline, we see that it is the cyclical $\beta$, rather than the cyclical learning rate, that contributes to the improved performance.
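The paper does not specify the exact learning-rate curve; one choice consistent with the cited warm-restart literature is cosine annealing restarted at the start of each β cycle, sketched here (the `lr_max`/`lr_min` values and the cosine shape are illustrative assumptions):

```python
import math

def cyclical_lr(t, T, M=4, lr_max=1e-3, lr_min=1e-5):
    """Cosine learning-rate decay restarted at the start of each of the
    M cycles, so that every beta cycle gets a full, equally long
    optimization trajectory."""
    cycle_len = math.ceil(T / M)
    s = (t % cycle_len) / cycle_len       # position within the cycle, in [0, 1)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * s))
```

Aligning the restarts with the β cycles means the learning rate is highest exactly when β is re-annealed from 0, and decays as the full VAE objective is optimized to convergence.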
7 Conclusions
We provide a novel two-path interpretation to explain the KL-vanishing issue, and identify its source as a lack of good latent codes at the beginning of decoder training. This provides an understanding of various scheduling schemes, and motivates the proposed cyclical schedule. By reopening Path A at $\beta = 0$, the cyclical schedule can progressively improve performance, leveraging the good latent codes learned in previous cycles as warm restarts. We demonstrate the effectiveness of the proposed approach on three NLP tasks, and show that it is superior or complementary to other techniques.
Acknowledgments
We thank Yizhe Zhang, Sungjin Lee, Dinghan Shen, and Wenlin Wang for insightful discussions. The implementations in our experiments build heavily on three NLP applications published in GitHub repositories; we thank all the authors who made their code public, which tremendously accelerated our project.
References
 Alemi et al. (2018) Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. 2018. Fixing a broken ELBO. In ICML.
 Bowman et al. (2015) Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.
 Chen and Cherry (2014) Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362–367.
 Chen et al. (2018) Ricky TQ Chen, Xuechen Li, Roger Grosse, and David Duvenaud. 2018. Isolating sources of disentanglement in VAEs. NIPS.
 Dieng et al. (2018) Adji B Dieng, Yoon Kim, Alexander M Rush, and David M Blei. 2018. Avoiding latent variable collapse with generative skip models. arXiv preprint arXiv:1807.04863.
 Fraccaro et al. (2016) Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. 2016. Sequential neural models with stochastic layers. In NIPS.
 Godfrey and Holliman (1997) J Godfrey and E Holliman. 1997. Switchboard1 release 2: Linguistic data consortium. SWITCHBOARD: A User’s Manual.
 Goodfellow et al. (2016) Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning, volume 1. MIT press Cambridge.
 Goyal et al. (2017) Anirudh Goyal Alias Parth Goyal, Alessandro Sordoni, Marc-Alexandre Côté, Nan Rosemary Ke, and Yoshua Bengio. 2017. Z-forcing: Training stochastic recurrent networks. In NIPS.
 Higgins et al. (2017) Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-VAE: Learning basic visual concepts with a constrained variational framework. ICLR.
 Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.

 Hoffman et al. (2013) Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. 2013. Stochastic variational inference. The Journal of Machine Learning Research.
 Hoffman and Johnson (2016) Matthew D Hoffman and Matthew J Johnson. 2016. ELBO surgery: yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, NIPS.
 Hu et al. (2017) Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. ICML.
 Huang et al. (2017) Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. 2017. Snapshot ensembles: Train 1, get m for free. ICLR.
 Kim and Mnih (2018) Hyunjik Kim and Andriy Mnih. 2018. Disentangling by factorising. ICML.
 Kim et al. (2018) Yoon Kim, Sam Wiseman, Andrew C Miller, David Sontag, and Alexander M Rush. 2018. Semiamortized variational autoencoders. ICML.
 Kingma and Welling (2013) Diederik P Kingma and Max Welling. 2013. Auto-encoding variational Bayes. ICLR.
 Lai et al. (2018) Guokun Lai, Bohan Li, Guoqing Zheng, and Yiming Yang. 2018. Stochastic wavenet: A generative latent variable model for sequential data. ICML workshop.

 Li et al. (2017) Chunyuan Li, Hao Liu, Changyou Chen, Yuchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. 2017. ALICE: Towards understanding adversarial learning for joint distribution matching. In NIPS.
 Liu et al. (2016) Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.
 Loshchilov and Hutter (2017) Ilya Loshchilov and Frank Hutter. 2017. SGDR: Stochastic gradient descent with warm restarts. ICLR.
 Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605.
 Makhzani et al. (2016) Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. 2016. Adversarial autoencoders. ICLR workshop.
 Marcus et al. (1993) Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics.
 Miao and Blunsom (2016) Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. EMNLP.
 Miao et al. (2016) Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In ICML.
 Mikolov et al. (2010) Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černockỳ, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.

 Rezende et al. (2014) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. ICML.
 Serban et al. (2016) Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI.
 Shen et al. (2017) Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NIPS.
 Shu et al. (2018) Rui Shu, Hung H Bui, Shengjia Zhao, Mykel J Kochenderfer, and Stefano Ermon. 2018. Amortized inference regularization. NIPS.
 Smith (2017) Leslie N Smith. 2017. Cyclical learning rates for training neural networks. In WACV. IEEE.
 Smith and Topin (2017) Leslie N Smith and Nicholay Topin. 2017. Super-convergence: Very fast training of residual networks using large learning rates. arXiv preprint arXiv:1708.07120.
 Wen et al. (2017) TsungHsien Wen, Yishu Miao, Phil Blunsom, and Steve Young. 2017. Latent intention dialogue models. ICML.
 Xu et al. (2017) Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In AAAI.
 Yang et al. (2017) Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. ICML.
 Zhang et al. (2017) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization. ICLR.
 Zhang et al. (2019) Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, and Andrew Gordon Wilson. 2019. Cyclical stochastic gradient MCMC for Bayesian deep learning. arXiv preprint arXiv:1902.03932.
 Zhao et al. (2017) Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. ACL.
Appendix A Comparison of different schedules
We compare the two scheduling schemes in Figure 7. The top row shows the three widely used monotonic schedules: linear, sigmoid, and cosine. Each can easily be turned into its corresponding cyclical version, shown in the bottom row.
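The six schedules in Figure 7 can be sketched in a few lines. This is a minimal illustration; the function name, argument names, and the sigmoid slope constant (10) are our own choices, not the paper's:

```python
import math

def beta_schedule(step, total_steps, n_cycles=1, ratio=0.5, kind="linear"):
    """Anneal the KL weight beta in [0, 1].

    With n_cycles=1 this reduces to the usual monotonic schedule; with
    n_cycles>1 it becomes cyclical: within each cycle, beta ramps up
    during the first `ratio` fraction and is then held at 1.
    """
    period = total_steps / n_cycles
    tau = (step % period) / period      # progress within the current cycle
    if tau > ratio:                     # second part of the cycle: beta fixed at 1
        return 1.0
    t = tau / ratio                     # rescale the ramp to [0, 1]
    if kind == "linear":
        return t
    if kind == "sigmoid":
        return 1.0 / (1.0 + math.exp(-10.0 * (t - 0.5)))
    if kind == "cosine":
        return 0.5 * (1.0 - math.cos(math.pi * t))
    raise ValueError(kind)
```

With `n_cycles=1` this recovers the plain monotonic schedules; increasing `n_cycles` yields their cyclical counterparts.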
Appendix B Proofs on the training objective and MI
When scheduled with $\beta$, the training objective over the dataset can be written as:

$\mathcal{F}_{\beta} = \sum_{n=1}^{N}\big[\mathbb{E}_{q_{\phi}(z|x_n)}[\log p_{\theta}(x_n|z)] - \beta\,\mathrm{KL}(q_{\phi}(z|x_n)\,\|\,p(z))\big]$  (12)
We proceed with the proof by rewriting each term separately.
B.1 Bound on the reconstruction term
Following Li et al. (2017), on the support of $(x, z)$, we denote $q(x,z) = q(x)\,q_{\phi}(z|x)$ as the encoder probability measure, and $p(x,z) = p(z)\,p_{\theta}(x|z)$ as the decoder probability measure. Note that the reconstruction loss for $x_n$ can be written in its negative log-likelihood form as:

$\mathbb{E}_{q_{\phi}(z|x_n)}[-\log p_{\theta}(x_n|z)]$  (13)
Lemma 1
For random variables $x$ and $z$ with two different probability measures, $q(x,z)$ and $p(x,z)$, we have

$\mathbb{E}_{q(x,z)}[-\log p(x|z)] \ge H_q(x|z)$  (14)

where $H_q(x|z)$ is the conditional entropy under $q$. Similarly, we can prove that

$\mathbb{E}_{q(x,z)}[-\log p(z|x)] \ge H_q(z|x)$  (15)

From Lemma 1, we have
Corollary 1
For random variables $x$ and $z$ with probability measure $q(x,z)$, the mutual information between $x$ and $z$ can be written as

$I_q(x, z) = H_q(z) - H_q(z|x)$  (16)
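As a numerical sanity check of Lemma 1 and Corollary 1, one can verify both statements on a small discrete joint distribution. The toy probabilities below are arbitrary illustrative choices, not taken from the paper:

```python
import math

# Toy joint q(x, z) over x, z in {0, 1}, and a mismatched decoder
# conditional p(x|z); all numbers are arbitrary.
q_joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_x_given_z = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}

q_z = {z: sum(p for (x, zz), p in q_joint.items() if zz == z) for z in (0, 1)}
q_x = {x: sum(p for (xx, z), p in q_joint.items() if xx == x) for x in (0, 1)}
q_x_given_z = {(x, z): q_joint[(x, z)] / q_z[z] for (x, z) in q_joint}
q_z_given_x = {(x, z): q_joint[(x, z)] / q_x[x] for (x, z) in q_joint}

# Lemma 1: the reconstruction loss E_q[-log p(x|z)] upper-bounds H_q(x|z).
cross_ent = -sum(p * math.log(p_x_given_z[xz]) for xz, p in q_joint.items())
cond_ent_x = -sum(p * math.log(q_x_given_z[xz]) for xz, p in q_joint.items())
assert cross_ent >= cond_ent_x

# Corollary 1: I_q(x, z) = H_q(z) - H_q(z|x).
h_z = -sum(p * math.log(p) for p in q_z.values())
h_z_given_x = -sum(p * math.log(q_z_given_x[xz]) for xz, p in q_joint.items())
mi = sum(p * math.log(p / (q_x[x] * q_z[z])) for (x, z), p in q_joint.items())
assert abs(mi - (h_z - h_z_given_x)) < 1e-12
```

The gap in the Lemma 1 inequality is exactly $\mathbb{E}_{q(z)}\mathrm{KL}(q(x|z)\,\|\,p(x|z)) \ge 0$, which is why the bound holds for any decoder measure.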
B.2 Decomposition of the KL term
Appendix C Model Description
C.1 Conditional VAE for dialog
Each conversation can be represented via three random variables: the dialog context $c$ composed of the dialog history, the response utterance $x$, and a latent variable $z$, which is used to capture the latent distribution over the valid responses $p(x|c)$ Zhao et al. (2017). The ELBO can be written as:

$\mathcal{L} = \mathbb{E}_{q_{\phi}(z|x,c)}[\log p_{\theta}(x|z,c)] - \mathrm{KL}(q_{\phi}(z|x,c)\,\|\,p(z|c))$  (18)
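When both the recognition network $q(z|x,c)$ and the prior network $p(z|c)$ are parameterized as diagonal Gaussians (a common choice in CVAE implementations, which we assume here), the KL term in Eq. (18) has a closed form. A per-dimension sketch, with our own function and argument names:

```python
import math

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL( N(mu_q, exp(logvar_q)) || N(mu_p, exp(logvar_p)) )
    for one latent dimension; sum over dimensions for a diagonal-Gaussian
    posterior and prior."""
    var_q, var_p = math.exp(logvar_q), math.exp(logvar_p)
    return 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
```

For a standard-normal prior this reduces to the familiar $\tfrac{1}{2}(\mu^2 + \sigma^2 - \log\sigma^2 - 1)$ term of the plain VAE.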
C.2 Semi-supervised learning with VAE
We use a simple factorization to derive the ELBO for semi-supervised learning, and introduce $\alpha$ to regularize the strength of the classification loss:

$\log p(x, y) = \log p(x) + \log p(y|x)$  (19)

$\mathcal{L} = \mathrm{ELBO}(x) + \alpha\,\mathbb{E}_{q_{\phi}(z|x)}[\log q_{\theta_c}(y|z)]$  (20)

where $\theta_c$ denotes the parameters of the classifier.
Good latent codes are crucial for classification performance, especially when simple classifiers are employed, or when less labelled data is available.
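A minimal sketch of how the ELBO and the $\alpha$-weighted classification loss can be combined in training. The helper name, the logits-based classifier interface, and the sign conventions are our own assumptions, not the paper's:

```python
import math

def semi_supervised_loss(neg_elbo, logits, label, alpha=1.0):
    """Total loss = (-ELBO) + alpha * cross-entropy classification loss.

    `logits` are scores from a classifier (parameters theta_c) applied to
    the latent code; `alpha` trades off generation against classification.
    """
    m = max(logits)  # numerically stable log-sum-exp
    log_norm = m + math.log(sum(math.exp(s - m) for s in logits))
    clf_loss = log_norm - logits[label]  # -log q(y = label | z)
    return neg_elbo + alpha * clf_loss
```

Setting `alpha=0` recovers pure unsupervised VAE training; larger `alpha` emphasizes the labelled examples.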
Figure 7: The three annealing schedules: (a) linear, (b) sigmoid, and (c) cosine.

Appendix D More Experimental Results
D.1 CVAE for Dialog Response Generation
Code. We implemented the different schedules based on the code published by Zhao et al. (2017): https://github.com/snakeztc/NeuralDialogCVAE
Dataset. In the SW dataset, there are 70 available topics. We randomly split the data into 2316/60/62 dialogs for train/validate/test.
Results
The full BLEU scores are shown in Table 5. The cyclical schedule outperforms the monotonic schedule in both settings. The learning curves are shown in Figure 8. Under similar ELBO values, the cyclical schedule provides lower reconstruction errors, higher KL values, and higher BLEU scores than the monotonic schedule. Interestingly, the monotonic schedule tends to overfit, while the cyclical schedule does not, particularly on reconstruction errors. This suggests that the cyclical schedule learns better latent codes for VAEs, thus preventing overfitting.
D.2 Semi-supervised Text Classification
Dataset
The Yelp restaurant reviews dataset utilizes the user rating associated with each review. Reviews with a rating above three are considered positive, and those below three are considered negative; this is therefore a binary classification problem. The preprocessing in Shen et al. (2017) allows sentiment analysis at the sentence level. It further filters out sentences that exceed 15 words. The resulting dataset has 250K negative sentences and 350K positive ones. The vocabulary size is 10K after replacing words occurring fewer than 5 times with the “unk” token.
Results
The t-SNE embeddings are visualized in Figure 9. We see that the cyclical schedule provides much more separated latent structures than the other two methods.
D.3 Hyperparameter tuning
The cyclical schedule has two hyperparameters: the number of cycles M and the proportion R. We provide the full results on M and R in Figure 11 and Figure 12, respectively. A larger number of cycles M provides higher performance for various proportion values R.
Table 5: Full BLEU scores. M = monotonic schedule, C = cyclical schedule.

Model     |    CVAE      |  CVAE+BoW
Schedule  |   M      C   |   M      C
B1 prec   | 0.326  0.423 | 0.384  0.397
B1 recall | 0.214  0.391 | 0.376  0.387
B2 prec   | 0.278  0.354 | 0.320  0.331
B2 recall | 0.180  0.327 | 0.312  0.323
B3 prec   | 0.237  0.299 | 0.269  0.279
B3 recall | 0.153  0.278 | 0.265  0.275
B4 prec   | 0.185  0.234 | 0.211  0.219
B4 recall | 0.122  0.220 | 0.210  0.219
Figure 8: Learning curves for (a) ELBO, (b) BLEU-4 (F1), (c) reconstruction error, and (d) KL.
Figure 9: t-SNE embeddings of latent codes for (a) the cyclical VAE, (b) the monotonic VAE, and (c) the AE.

Figures 11 and 12: Results for varying M and R, shown as (a) ELBO, (b) PPL, (c) reconstruction error, and (d) KL.