Variational Attention for Sequence-to-Sequence Models

12/21/2017, by Hareesh Bahuleyan et al., University of Waterloo

The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network. In natural language processing, sequence-to-sequence (Seq2Seq) models typically serve as encoder-decoder networks. When combined with a traditional (deterministic) attention mechanism, the variational latent space may be bypassed by the attention model, making the generated sentences less diversified. In our paper, we propose a variational attention mechanism for VED, where the attention vector is modeled as normally distributed random variables. Experiments show that variational attention increases diversity while retaining high quality. We also show that the model is not sensitive to hyperparameters.


1 Introduction

This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/

The variational autoencoder (VAE), proposed by Kingma and Welling (2014), encodes data to latent (random) variables, and then decodes the latent variables to reconstruct the input data. Theoretically, it optimizes a variational lower bound of the log-likelihood of the data. Compared with traditional variational methods such as mean-field approximation [Wainwright et al.2008], VAE leverages modern neural networks and hence is a more powerful density estimator. Compared with traditional autoencoders [Hinton and Salakhutdinov2006], which are deterministic, VAE populates hidden representations to a region (instead of a single point), making it possible to generate diversified data from the vector space [Bowman et al.2016] or even control the generated samples [Hu et al.2017].

In natural language processing (NLP), recurrent neural networks (RNNs) are typically used as both the encoder and decoder, known as a sequence-to-sequence (Seq2Seq) model. Although variational Seq2Seq models are much trickier to train than their counterparts in the image domain, Bowman et al. (2016) succeeded in training a sequence-to-sequence VAE and generating sentences from a continuous latent space. Such an architecture can further be extended to a variational encoder-decoder (VED) to transform one sequence into another with the “variational” property [Serban et al.2017, Zhou and Neubig2017].

When applying attention mechanisms [Bahdanau et al.2015] to variational Seq2Seq models, however, we find that the generated sentences show little variety, implying that the variational latent space is ineffective. The attention mechanism summarizes source information as an attention vector by a weighted sum, where the weights form a learned probabilistic distribution; the attention vector is then fed to the decoder. Evidence shows that attention significantly improves Seq2Seq performance in translation [Bahdanau et al.2015], summarization [Rush et al.2015], etc. In variational Seq2Seq, however, the attention mechanism unfortunately serves as a “bypassing” mechanism: the variational latent space does not need to learn much, as long as the attention mechanism itself is powerful enough to capture the source information.

In this paper, we propose a variational attention mechanism to address this problem. We model the attention vector as random variables by imposing a probabilistic distribution. Following traditional VAE, we use a Gaussian prior for the attention vector, for which we propose two plausible choices: the mean is either a zero vector or the average of the source hidden states.

We evaluate our approach on two experiments: question generation and dialog systems. Experiments show that the proposed variational attention yields a higher diversity than variational Seq2Seq with deterministic attention, while retaining high quality of generated sentences. In this way, we make VED work properly with the powerful attention mechanism.

In summary, the main contributions of this paper are two-fold: (1) We discover a “bypassing” phenomenon in VED, which can make the learning of the variational space ineffective. (2) We propose a variational attention mechanism that models the attention vector as random variables to alleviate the above problem. To the best of our knowledge, we are the first to address the attention mechanism in variational encoder-decoder neural networks. Our model is a general framework, which can be applied to various text generation tasks.

2 Background and Motivation

In this section, we introduce the variational autoencoder and the attention mechanism. We also present a pilot experiment motivating our variational attention model.

2.1 Variational Autoencoder (VAE)

A VAE encodes data x (e.g., a sentence) as hidden random variables z, based on which the decoder reconstructs x. Consider a generative model, parameterized by θ, as

p_θ(x^(n) | z^(n))    (1)

Given a dataset X = {x^(1), ..., x^(N)}, the likelihood of a data point x^(n) is

p(x^(n)) = ∫ p_θ(x^(n) | z) p(z) dz    (2)

VAE models both the posterior q_φ(z|x) and the generative distribution p_θ(x|z) with neural networks, parametrized by φ and θ, respectively. Figure 1a shows the graphical model of this process. The training objective is to maximize the lower bound of the likelihood, which can be rewritten as minimizing

J_VAE^(n) = −E_{q_φ(z|x^(n))}[log p_θ(x^(n) | z)] + KL(q_φ(z|x^(n)) ∥ p(z))    (3)

The first term, called the reconstruction loss, is the (expected) negative log-likelihood of the data, similar to traditional deterministic autoencoders. The expectation is obtained by Monte Carlo sampling. The second term is the KL-divergence between z’s posterior and prior distributions. Typically, the prior is set to the standard normal N(0, I).
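To make the two terms in Eq. (3) concrete, below is a minimal sketch (in PyTorch, not the authors' implementation) of the per-example loss, assuming a diagonal Gaussian posterior parameterized by mu and logvar; the expectation is estimated with a single reparameterized sample.

```python
import torch
import torch.nn.functional as F

def sample_z(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def vae_loss(logits, target, mu, logvar):
    """One-sample Monte Carlo estimate of Eq. (3).

    logits: decoder scores over the vocabulary, shape (seq_len, vocab)
    target: reference token ids, shape (seq_len,)
    mu, logvar: parameters of the posterior q(z|x) = N(mu, diag(exp(logvar)))
    """
    # Reconstruction loss: expected negative log-likelihood of the data.
    rec = F.cross_entropy(logits, target, reduction="sum")
    # KL( q(z|x) || N(0, I) ) in closed form for diagonal Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```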

2.2 Variational Encoder-Decoder (VED)

In some applications, we would like to transform source information to target information, e.g., machine translation, dialogue systems, and text summarization. In these tasks, “auto”-encoding is not sufficient, and an encoding-decoding framework is required. Different efforts have been made to extend VAE to variational encoder-decoder (VED) frameworks, which transform an input x to an output y. One possible extension is to condition all probabilistic distributions further on x [Zhang et al.2016, Cao and Clark2017, Serban et al.2017]. In this case, the posterior of z is given by q_φ(z|x, y). This, however, introduces a discrepancy between training and prediction, since y is not available during the prediction stage.

Figure 1: Graphical model representations. (a) Variational autoencoder (VAE). (b) Variational encoder-decoder (VED). (c) VED with deterministic attention (VED+DAttn). (d) VED with variational attention (VED+VAttn). Dashed lines: Encoding phase. Solid lines: Decoding phase.

Another approach is to build a recognition model using only x [Zhou and Neubig2017]. Making the assumption that y is a function of x, i.e., y = y(x), we have q_φ(z|x, y) = q_φ(z|x, y(x)) = q_φ(z|x). In this work, we follow Zhou and Neubig (2017) and adopt this extension. Figure 1b shows the graphical model of the VED used in our work.

2.3 Seq2Seq and Attention Mechanism

In NLP, sequence-to-sequence recurrent neural networks are typically used as the encoder and decoder, as they are suitable for modeling a sequence of words (i.e., a sentence). Figure 2a shows a basic Seq2Seq model in the VAE/VED scenario [Bowman et al.2016]. The encoder takes an input x and outputs μ_z and σ_z as the parameters of z’s posterior normal distribution. The decoder then generates the output based on a sample z drawn from this posterior distribution.
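A sketch of such an encoder is given below (our illustration, not the released code, assuming a single-layer unidirectional LSTM): the final hidden state is mapped to the mean and log-variance of z's posterior by two linear layers, while the per-token states are kept for later use by attention.

```python
import torch
import torch.nn as nn

class VariationalEncoder(nn.Module):
    """Encode a source sentence into the parameters of q(z|x) (sketch)."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, latent_dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, src_ids):
        # h_src: per-token source hidden states (reused by attention later);
        # h_last: final hidden state summarizing the source sentence.
        h_src, (h_last, _) = self.rnn(self.emb(src_ids))
        mu = self.to_mu(h_last[-1])
        logvar = self.to_logvar(h_last[-1])
        return h_src, mu, logvar
```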

Attention mechanisms are proposed to dynamically align the target and the source during generation. At each time step j in the decoder, the attention mechanism computes a probabilistic distribution over source positions by

α_{ji} = exp(e_{ji}) / Σ_{i'} exp(e_{ji'})    (4)

where e_{ji} is a pre-normalized score, computed by e_{ji} = (h_j^(t))⊤ W_a h_i^(s) in our model. Here, h_j^(t) and h_i^(s) are the hidden representations of the jth step in the target and the ith step in the source, and W_a is a learnable weight matrix.

The source information is then summed with the weights α_{ji} to obtain the attention vector

a_j = Σ_i α_{ji} h_i^(s)    (5)

which is fed to the decoder RNN at the jth step. Figure 2b shows the variational Seq2Seq model with such traditional attention.
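A minimal sketch of Eqs. (4) and (5) under the bilinear scoring described above; the tensor names (h_t, h_src, W_a) are ours for illustration.

```python
import torch

def deterministic_attention(h_t, h_src, W_a):
    """Compute the attention vector of Eqs. (4)-(5).

    h_t:   target hidden state at step j, shape (d_t,)
    h_src: source hidden states, shape (src_len, d_s)
    W_a:   learnable weight matrix, shape (d_t, d_s)
    """
    # Pre-normalized scores e_ji = h_t^T W_a h_i (Eq. 4).
    scores = h_src @ (W_a.t() @ h_t)          # (src_len,)
    # Normalize into a probability distribution alpha_j over source positions.
    alpha = torch.softmax(scores, dim=0)      # (src_len,)
    # Attention vector: weighted sum of source states (Eq. 5).
    a_j = alpha @ h_src                       # (d_s,)
    return a_j, alpha
```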

Input: the men are playing musical instruments

(a) VAE w/o hidden state init. (Avg entropy: 2.52):
  the men are playing musical instruments
  the men are playing video games
  the musicians are playing musical instruments
  the women are playing musical instruments

(b) VAE w/ hidden state init. (Avg entropy: 2.01):
  the men are playing musical instruments
  the men are playing musical instruments
  the men are playing musical instruments
  the man is playing musical instruments

Table 1: Sentences obtained by sampling from the VAE’s latent space. (a) VAE without hidden state initialization. (b) VAE with hidden state initialization.

2.4 “Bypassing” Phenomenon

In this part, we explain the “bypassing” phenomenon that arises in VAE/VED if the network is not designed properly; this motivates our variational attention described in Section 3.

We observe that, if the decoder has a direct, deterministic access to the source, the latent variables might not capture much information so that the VAE or VED does not play a role in the process. We call this a bypassing phenomenon.

Theoretically, if p_θ(x|z) is aware of x by itself, i.e., it becomes p_θ(x|z, x), then z can be learned to carry no information about x without hurting the reconstruction loss, while the KL term in Eq. (3) can be minimized by fitting the posterior q_φ(z|x) to its prior p(z). This degrades a variational Seq2Seq model to a deterministic one.

The phenomenon can be best shown with a bypassing connection between the encoder and decoder for hidden state initialization. Some previous studies using VEDs set the decoder’s initial state to be the encoder’s final state [Cao and Clark2017], shown in Figure 2c. We conducted a pilot study with a Seq2Seq VAE on a subset (80k samples) of the massive SNLI corpus [Bowman et al.2015], and show generated sentences and entropy in Table 1. We see that the variational Seq2Seq can only generate very similar sentences with such a bypassing connection (Table 1b), as opposed to generating diversified samples from the latent space (Table 1a). We also computed the entropy for 10 randomly sampled outputs for a given input sentence. Quantitatively, the entropy decreases by 0.5 on average over 1k unseen input sentences. This is a significant difference because entropy is a logarithmic metric. Our analysis sheds light on the design philosophy of neural architectures in VAE or VED.

Since attention largely improves model performance for deterministic Seq2Seq models, it is tempting to include attention in variational Seq2Seq as well. However, our pilot experiment raises the concern that a traditional (deterministic) attention mechanism may bypass the latent space in VED, as illustrated by the graphical model in Figure 1c. Moreover, evidence in Zheng et al. (2018) shows that the attention mechanism is so powerful that removing other connections between the encoder and decoder has little effect on BLEU scores in machine translation. Therefore, a VED with deterministic attention might learn the reconstruction mostly from attention, whereas the posterior of the latent space can simply fit its prior in order to minimize the KL term.

To alleviate this problem, we propose a variational attention mechanism for variational Seq2Seq models, as is described in detail in the next section.

3 The Proposed Variational Attention

Figure 2: (a) Variational Seq2Seq model. (b) Variational Seq2Seq with deterministic attention. (c) Variational Seq2Seq with hidden state initialization. (d) Variational Seq2Seq with variational attention.

Let us consider the decoding process of an RNN. At each timestep j, it updates its hidden state h_j^(t) with an input of a word embedding y_{j-1} (typically the groundtruth during training and the prediction from the previous step during testing). This is given by h_j^(t) = RNN(h_{j-1}^(t), y_{j-1}). In our experiments, we use long short-term memory units [Hochreiter and Schmidhuber1997] as the RNN’s transition. Enhanced with attention, the RNN transition also takes the attention vector as input, i.e., h_j^(t) = RNN(h_{j-1}^(t), [y_{j-1}; a_j]). The predicted word is given by a softmax layer p(y_j) = softmax(W_out h_j^(t)), where W_out is a weight matrix. As discussed earlier, traditional attention computes a_j in a deterministic fashion by Eq. (5).
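As an illustration of the decoding step described above, the following sketch (hypothetical, not the released code) concatenates the previous word embedding with the attention vector as the LSTM input and applies a softmax output layer.

```python
import torch
import torch.nn as nn

class AttnDecoderStep(nn.Module):
    """One decoding step of an attention-augmented LSTM decoder (sketch)."""

    def __init__(self, emb_dim, attn_dim, hidden_dim, vocab_size):
        super().__init__()
        # The LSTM cell takes [word embedding ; attention vector] as input.
        self.cell = nn.LSTMCell(emb_dim + attn_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)  # softmax layer weights

    def forward(self, y_prev_emb, a_j, state):
        """y_prev_emb: embedding of the previous word; a_j: attention vector;
        state: (h, c) of the LSTM cell from the previous step."""
        h, c = self.cell(torch.cat([y_prev_emb, a_j], dim=-1), state)
        logits = self.out(h)   # pre-softmax scores over the vocabulary
        return logits, (h, c)
```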

To build a variational attention mechanism, we treat both the traditional latent space z and the attention vector a_j as random variables. The recognition and reconstruction graphical models are shown in Figure 1d.

3.1 Lower Bound

Since the likelihood of the nth data point decomposes over time steps, we consider the lower bound at the jth step. The variational lower bound of the likelihood in Eq. (2) becomes

log p(y_j^(n) | x^(n)) ≥ E_{q_φ(z, a_j | x^(n))}[log p_θ(y_j^(n) | z, a_j)] − KL(q_φ(z, a_j | x^(n)) ∥ p(z, a_j))    (6)
  = E_{q_φ(z | x^(n)) q_φ(a_j | x^(n))}[log p_θ(y_j^(n) | z, a_j)] − KL(q_φ(z | x^(n)) ∥ p(z)) − KL(q_φ(a_j | x^(n)) ∥ p(a_j))    (7)

Eq. (7) is due to the independence in both the recognition and reconstruction phases. The posterior factorizes as q_φ(z, a_j | x^(n)) = q_φ(z | x^(n)) q_φ(a_j | x^(n)) because z and a_j are conditionally independent given x (dashed lines in Figure 1d), whereas the prior factorizes as p(z, a_j) = p(z) p(a_j) because z and a_j are marginally independent (solid lines in Figure 1d). In this way, the sampling procedure can be done separately and the loss can also be computed independently.

3.2 Prior

We propose two plausible prior distributions for the attention vector a_j.

  • The simplest prior, perhaps, is the standard normal, i.e., p(a_j) = N(0, I). This follows the prior of the latent space z as in a conventional variational autoencoder [Kingma and Welling2014, Bowman et al.2016].

  • We observe that the attention vector a_j has to lie inside the convex hull of the hidden representations of the source sequence, i.e., a_j ∈ conv{h_i^(s)}. We therefore impose a normal prior whose mean is the average of the source hidden states, i.e., p(a_j) = N(ā, I), where ā = (1/|x|) Σ_i h_i^(s), making the prior non-informative. The corresponding KL terms for both priors are sketched in the code below.
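The following sketch (our illustration) gives the closed-form KL term of a diagonal Gaussian posterior against either prior; h_bar stands for the average ā of the source hidden states.

```python
import torch

def kl_to_prior(mu_a, logvar_a, prior_mean=None):
    """KL( N(mu_a, diag(exp(logvar_a))) || prior ) for the two priors above.

    prior_mean=None  -> standard normal prior N(0, I)
    prior_mean=h_bar -> N(average of source hidden states, I)
    Closed form for diagonal Gaussians with identity prior covariance.
    """
    if prior_mean is None:
        prior_mean = torch.zeros_like(mu_a)
    diff = mu_a - prior_mean
    return -0.5 * torch.sum(1 + logvar_a - diff.pow(2) - logvar_a.exp())

# Prior mean for the second variant, given h_src of shape (src_len, d_s):
# h_bar = h_src.mean(dim=0)
```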

3.3 Posterior

We model the posterior of a_j as a normal distribution q_φ(a_j|x) = N(μ_a, diag(σ_a²)), where the parameters μ_a and σ_a are obtained by a recognition neural network. Similar to VAEs, we first compute the deterministic attention vector a_j^det as in Eq. (5), and then transform it by another layer, shown in Figure 2d.

For the mean μ_a, we apply an identity transformation, i.e., μ_a = a_j^det. The identity transformation makes much sense as it preserves the spirit of “attention.” To compute σ_a, we first transform a_j^det by a neural layer with a tanh activation. The resulting vector then undergoes a linear transformation followed by an exp activation function to ensure that the values are positive.
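A sketch of this recognition network (our illustration, assuming the tanh hidden layer and exp output activation described above); the reparameterized sample is what the decoder consumes.

```python
import torch
import torch.nn as nn

class VariationalAttention(nn.Module):
    """Posterior q(a_j | x) = N(mu_a, diag(sigma_a^2)) built on top of the
    deterministic attention vector (sketch; names are ours)."""

    def __init__(self, attn_dim):
        super().__init__()
        self.hidden = nn.Linear(attn_dim, attn_dim)       # tanh layer
        self.to_log_sigma = nn.Linear(attn_dim, attn_dim) # before exp

    def forward(self, a_det):
        # Mean: identity transformation of the deterministic attention vector.
        mu_a = a_det
        # Std: tanh layer, then a linear layer; exp keeps the values positive.
        sigma_a = torch.exp(self.to_log_sigma(torch.tanh(self.hidden(a_det))))
        # Reparameterized sample fed to the decoder.
        a_j = mu_a + sigma_a * torch.randn_like(mu_a)
        return a_j, mu_a, sigma_a
```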

3.4 Training Objective

The overall training objective of Seq2Seq with both the variational latent space and variational attention is to minimize

J^(n) = Σ_j { −E_{q_φ(z|x^(n)) q_φ(a_j|x^(n))}[log p_θ(y_j^(n) | z, a_j)] } + λ_KL [ KL(q_φ(z|x^(n)) ∥ p(z)) + γ_a Σ_j KL(q_φ(a_j|x^(n)) ∥ p(a_j)) ]    (8)

Here, the hyperparameter λ_KL balances the reconstruction loss and the KL losses, and γ_a further balances the attention’s KL loss against z’s KL loss. Since VAE and VED are tricky to train with Seq2Seq models (e.g., requiring KL annealing), we tie the change of both KL terms and only anneal λ_KL. (Training details will be presented in Section 4.1.)
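Eq. (8) can be assembled from the pieces sketched above; the following function (names are ours) simply combines the reconstruction loss with the two KL terms weighted by λ_KL and γ_a.

```python
def ved_vattn_loss(rec_loss, kl_z, kl_a_per_step, lambda_kl, gamma_a):
    """Total loss of Eq. (8) for one training example (sketch).

    rec_loss:      summed reconstruction loss over decoder steps
    kl_z:          KL term of the sentence-level latent variable z
    kl_a_per_step: list of attention KL terms, one per decoder step j
    lambda_kl:     annealed coefficient balancing reconstruction vs. KL
    gamma_a:       relative weight of the attention KL term
    """
    return rec_loss + lambda_kl * (kl_z + gamma_a * sum(kl_a_per_step))
```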

Notice that if a_j has a prior of N(ā, I), the derivative of the KL term is also propagated back to the source hidden states through ā. This can be computed straightforwardly or by auto-differentiation tools, e.g., TensorFlow.

3.5 Geometric Interpretation

We present a geometric interpretation of both deterministic and variational attention mechanisms in Figure 3.

Suppose the source hidden representations h_i^(s) lie in a k-dimensional space (represented as a 3-d space in Figure 3). In the deterministic mechanism, the attention vector is a convex combination of {h_i^(s)}, as the weights α_{ji} in Eq. (5) form a probabilistic distribution. The attention vector is therefore a point in the convex hull conv{h_i^(s)}, shown in Figure 3a.

For variational attention in Figures 3b and 3c, the mean of the posterior is still in the convex hull, but a sample drawn from the posterior populates the entire space (although mostly around the mean, shown as a ball). The difference between the two variants is that the standard normal prior pulls the posterior towards the origin, whereas the N(ā, I) prior pulls the posterior towards the mean of {h_i^(s)} (indicated by red arrows).

Finally, we would like to present a (potential) alternative for modeling variational attention. Instead of treating the attention vector a_j as random variables, we might also treat the attention weights α_j as random variables. Since α_j is the parameter of a categorical distribution, its conjugate prior is a Dirichlet distribution. In this case, the resulting attention vector populates the entire convex hull (Figure 3d). However, this relies on a reparametrization trick to propagate the reconstruction error’s gradient back to the recognition neural network [Kingma and Welling2014]. In other words, the sampled latent variables should be drawn from a fixed (parameter-free) distribution and then transformed to the desired sample using the distribution’s parameters. This is nontrivial for Dirichlet distributions, and further research is needed to address this problem.

Figure 3: Geometric interpretation of attention mechanisms.

4 Experiments

We evaluated our model on two tasks: question generation (Section 4.1) and dialog systems (Section 4.2).

4.1 Experiment I: Question Generation

Model Inference BLEU-1 BLEU-2 BLEU-3 BLEU-4 Entropy Dist-1 Dist-2
DED (w/o Attn) [Du et al.2017] MAP 31.34 13.79 7.36 4.26 - - -
DED (w/o Attn) MAP 29.31 12.42 6.55 3.61 - - -
DED+DAttn MAP 30.24 14.33 8.26 4.96 - - -
VED+DAttn MAP 31.02 14.57 8.49 5.02 - - -
Sampling 30.87 14.71 8.61 5.08 2.214 0.132 0.176
VED+DAttn (2-stage training) MAP 28.88 13.02 7.33 4.16 - - -
Sampling 29.25 13.21 7.45 4.25 2.241 0.140 0.188
VED+VAttn-0 MAP 29.70 14.17 8.21 4.92 - - -
Sampling 30.22 14.22 8.28 4.87 2.320 0.165 0.231
VED+VAttn-ā MAP 30.23 14.30 8.28 4.93 - - -
Sampling 30.47 14.35 8.39 4.96 2.316 0.162 0.228
Table 2: BLEU, entropy, and distinct scores. We compare the deterministic encoder-decoder (DED) and variational encoder-decoders (VEDs). For VED, we have several variants: deterministic attention (DAttn) and the proposed variational attention (VAttn). Variational models are evaluated by both maximum a posteriori (MAP) inference and sampling.

Task, Dataset, and Metrics.

We first evaluated our approach on a question generation task. It uses the Stanford Question Answering Dataset (SQuAD) [Rajpurkar et al.2016], and aims to generate questions based on a sentence in a paragraph. We used the same train-validation-test split as in Du et al. (2017). According to Du et al. (2017), the attention mechanism is especially critical in this task in order to generate relevant questions. Also, generated questions do need some variety (e.g., in the creation of reading comprehension datasets), as opposed to machine translation, which is typically deterministic.

We followed Du et al. (2017) and used BLEU-1 to BLEU-4 scores [Papineni et al.2002] to evaluate the quality (in the sense of accuracy) of generated sentences. Besides, we adopted entropy and distinct metrics to measure diversity. Entropy is computed as −Σ_w p(w) log p(w), where p(w) is the unigram probability in the generated sentences. Distinct metrics, used in previous work to measure diversity [Li et al.2016], compute the percentage of distinct unigrams or bigrams (denoted as Dist-1 and Dist-2, respectively).
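For concreteness, a simple way to compute these diversity metrics over a set of generated sentences is sketched below (our illustration; whitespace tokenization and natural-log entropy are assumptions).

```python
import math
from collections import Counter

def entropy_and_distinct(sentences):
    """Diversity metrics (sketch): unigram entropy and the fraction of
    distinct unigrams/bigrams over a list of generated sentences."""
    unigrams, bigrams = [], []
    for sent in sentences:
        tokens = sent.split()
        unigrams.extend(tokens)
        bigrams.extend(zip(tokens, tokens[1:]))
    counts = Counter(unigrams)
    total = sum(counts.values())
    # Entropy of the unigram distribution: -sum_w p(w) log p(w).
    ent = -sum((c / total) * math.log(c / total) for c in counts.values())
    dist1 = len(set(unigrams)) / max(len(unigrams), 1)
    dist2 = len(set(bigrams)) / max(len(bigrams), 1)
    return ent, dist1, dist2
```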

Training Details.

We used LSTM-RNNs with 100 hidden units for both the encoder and decoder; the dimension of the latent vector was also 100d. We adopted 300d word embeddings [Mikolov et al.2013], pretrained on the SQuAD dataset. For both the source and target sides, the vocabulary was limited to the most frequent 40k tokens. We used the Adam optimizer [Kingma and Ba2015] to train all models, with an initial learning rate of 0.005, a multiplicative decay of 0.95, and other default hyperparameters. The batch size was set to be 100.

As shown in Bowman et al. (2016), the Seq2Seq VAE is hard to train because of issues associated with the KL term vanishing to zero. Following Bowman et al. (2016), we adopted KL cost annealing and word dropout during training. The coefficient of the KL term was gradually increased using a logistic annealing schedule, allowing the model to learn to reconstruct the input accurately during the early stages of training. A fixed word dropout rate was used throughout training.
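A sketch of these two training tricks follows; the logistic schedule's slope k and midpoint x0 are illustrative placeholders rather than the values used in the paper, and the word-dropout helper replaces decoder-input words with an <unk> token.

```python
import math
import random

def kl_weight(step, k=0.0025, x0=2500):
    """Logistic KL cost annealing schedule (sketch; k and x0 are illustrative)."""
    return 1.0 / (1.0 + math.exp(-k * (step - x0)))

def word_dropout(tokens, rate, unk="<unk>"):
    """Randomly replace decoder-input words with <unk> at the given rate."""
    return [unk if random.random() < rate else tok for tok in tokens]
```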

All the hyperparameter tuning was based on validation performance on the motivating Seq2Seq VAE discussed in Section 2.4, and the same hyperparameters were used for all of the models described in Section 3.

Overall Performance.

Table 2 presents the performance of the various models. We first implemented a traditional vanilla Seq2Seq model, which we call a deterministic encoder-decoder (DED), and generally replicated the results on the question generation task reported in Du et al. (2017), showing that our implementation is fair. Incorporating the attention mechanism into this model (DED+DAttn) improves BLEU scores, as expected. In the variational encoder-decoder (VED) framework, we report results obtained by both maximum a posteriori (MAP) inference and sampling. In the sampling setting, we draw 10 samples of z (and/or a_j) from the posterior given x for each data point, and report average BLEU scores.

The proposed variational attention model (VED+VAttn) largely outperforms deterministic attention (VED+DAttn) in terms of all diversity metrics. It should be noted that entropy is a logarithmic measure, and hence a difference of 0.1 in Table 2 is significant; VED+VAttn also generates more distinct unigrams and bigrams than VED+DAttn.

Regarding the prior of variational attention, we propose two variants, N(0, I) and N(ā, I), denoted as VED+VAttn-0 and VED+VAttn-ā, respectively. VED+VAttn-0 has slightly lower BLEU scores but higher diversity. The results are generally comparable, showing that both priors are reasonable.

We also tried a heuristic of two-stage training (VED+DAttn 2-stage), in which the VED is first trained without attention for 6 epochs, and then the attention mechanism is added to the model. This heuristic was proposed in hopes of better training the variational latent space in the early stages. However, experiments show that such a simple heuristic does not help much, and it is worse than the principled variational attention mechanism in terms of all BLEU and diversity metrics.

Figure 4: BLEU-2, BLEU-4, Entropy, and Dist-1 calculated on the validation set as training progresses.
Figure 5: BLEU-2, BLEU-4, Entropy, and Dist-1 with different γ_a values.

Human Evaluation.

In order to assess the quality of the generated text in terms of language fluency, a human evaluation study was carried out. For each of the two models under comparison (VED+DAttn and VED+VAttn-ā), a randomly shuffled subset of 100 generated questions was selected. Six human evaluators were asked to rate the fluency of these 200 questions on a 5-point scale: 5-Flawless, 4-Good, 3-Adequate, 2-Poor, 1-Incomprehensible, following Stent et al. (2005). The average rating was 3.99 for VED+DAttn and 4.01 for VED+VAttn-ā; the difference is not statistically significant. The human annotations achieved an average Spearman correlation coefficient (measuring rank correlation) of 0.61 between any two annotators, which, according to Swinscow (1976), indicates moderate to strong agreement among annotators. Hence, we conclude that variational attention does not negatively affect the fluency of sentences.
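For reference, inter-annotator agreement of this kind can be computed as the average pairwise Spearman correlation; a small sketch (ours, using scipy) is shown below.

```python
from itertools import combinations
from scipy.stats import spearmanr

def average_pairwise_spearman(ratings):
    """ratings: list of per-annotator rating lists, aligned by question.
    Returns the mean Spearman correlation over all annotator pairs."""
    corrs = [spearmanr(a, b).correlation for a, b in combinations(ratings, 2)]
    return sum(corrs) / len(corrs)
```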

Learning curves.

Figure 4 shows the trends of sentence quality (BLEU-2 and BLEU-4) and diversity (entropy and Dist-1) for all models on the validation set as training progresses. (Other metrics are omitted because the trend is the same.) We see that BLEU and diversity are conflicting objectives: a high BLEU score indicates resemblance to the groundtruth, resulting in low diversity. However, the variational attention mechanisms (red and green lines in Figure 4) remain high in both aspects, showing the effectiveness of our model.

Source when the british forces evacuated at the close of the war in 1783 ,
they transported 3,000 freedmen for resettlement in nova scotia .
Reference in what year did the american revolutionary war end ?
VED+DAttn how many people evacuated in newfoundland ?
how many people evacuated in newfoundland ?
what did the british forces seize in the war ?
VED+VAttn-ā how many people lived in nova scotia ?
where did the british forces retreat ?
when did the british forces leave the war ?
Source downstream , more than 200,000 people were evacuated from
mianyang by june 1 in anticipation of the dam bursting .
Reference how many people were evacuated downstream ?
VED+DAttn how many people evacuated from the mianyang basin ?
how many people evacuated from the mianyang basin ?
how many people evacuated from the mianyang basin ?
VED+VAttn-ā how many people evacuated from the tunnel ?
how many people evacuated from the dam ?
how many people were evacuated from fort in the dam ?
Table 3: Case study of question generation.

Strength of Attention’s KL Loss.

We tuned the strength of the attention’s KL term, i.e., γ_a in Eq. (8), and plot the BLEU and diversity metrics in Figure 5. In this experiment, we used the VED+VAttn-ā variant. As shown, a decrease in γ_a increases the quality of generated sentences at the cost of diversity. This is expected because a lower γ_a gives the model less incentive to optimize the attention’s KL term, which then causes the model to behave more “deterministically.” Based on this experiment, we chose a value of 0.1 for γ_a, as it yields a learning curve in the middle among those of different hyperparameters, striking a good balance between quality and diversity.

It should further be mentioned that, with a milder γ_a (e.g., 0.01), VED+VAttn outperforms VED+DAttn in terms of both quality and diversity (on the validation set). This is consistent with the evidence that the variational latent space may serve as a way of regularization and improve quality [Zhang et al.2016]. However, a small γ_a only slightly improves diversity, and hence we did not choose this hyperparameter setting in Table 2.

Case study.

Table 3 shows two examples of sentences generated by VED+DAttn and VED+VAttn-ā, each containing three random sentences drawn from the variational latent space(s) for a given input. In both examples, variational attention generates more diversified sentences than deterministic attention, while the quality of the generated sentences is close for the two models.

4.2 Experiment II: Dialog Systems

Model Inference BLEU-2 Entropy Dist-1 Dist-2
DED+DAttn MAP 1.84
VED+DAttn MAP 1.68
Sampling 1.68 2.113 0.311 0.450
VED+VAttn-ā MAP 1.78
Sampling 1.79 2.167 0.324 0.467
Table 4: Performance on conversation systems.

We present another experiment on generative conversation systems. The goal is to generate a reply based on a user-issued utterance. We used the Cornell Movie-Dialogs Corpus (https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) [Danescu-Niculescu-Mizil and Lee2011] as our dataset, which contains more than 200k conversational exchanges. All settings in this experiment were the same as in Subsection 4.1, except that we used a vocabulary of 30k words for both the encoder and decoder. We evaluated the quality of generated replies with BLEU-2, as it has been observed to be more or less correlated with human annotation among the BLEU metrics [Liu et al.2016].

Table 4 shows the performance of our model (VED+VAttn-ā) compared with two baselines. We see that the VEDs are slightly worse than the deterministic encoder-decoder (DED) in this experiment. However, variational attention outperforms deterministic attention in terms of both quality and diversity, showing that our model is effective across different applications. That said, the improvement is not as large as in the previous experiment. We conjecture that in conversational systems the alignment between source and target information is weaker, and hence the attention mechanism itself is less effective.

5 Related Work

The variational autoencoder (VAE) was proposed by Kingma and Welling (2014) for image generation. In NLP, it has been used to generate sentences [Bowman et al.2016]. Serban et al. (2017) propose a variational encoder-decoder (VED) model to generate better (more diverse and thus more meaningful) replies in a dialog system. VED frameworks have also been applied to knowledge base reasoning [Zhang et al.2018]. Another thread of VAE/VED applications is to control certain characteristics of the generated data, such as the angle of a face image [Kumar et al.2017] or the sentiment of a sentence [Hu et al.2017].

In this paper, the focus is on the scenario where VED is combined with attention mechanism. We show that the variational attention space is effective, in terms of the diversity of sampled sentences (since VEDs are probabilistic models). Although previous studies have addressed diversity using diversified beam search [Vijayakumar et al.2016] and determinantal point processes [Song et al.2018], we would like to point out that our paper is “orthogonal” to those studies. The diversity in our approach arises through probabilistic modeling, as opposed to a manually specified heuristic function of the diversity metric. It is to be noted that our approach can be naturally combined with the above methods.

6 Conclusion and Future Work

In this paper, we proposed a variational attention mechanism for variational encoder-decoder (VED) frameworks. We observe that, in VEDs, if the decoder has direct access to the encoder, the connection may bypass the variational space; traditional attention mechanisms in particular may serve as such a bypassing connection, making the output less diverse. Our variational attention mechanism imposes a probabilistic distribution on the attention vector, and we proposed two priors for it. The proposed model was evaluated on two tasks, question generation and dialog systems, showing that variational attention yields more diversified samples while retaining high quality.

In future work, it would be interesting to investigate VEDs that model the attention probability with Dirichlet distributions (see Figure 3d). Our framework also provides a principled methodology for designing variational encoding-decoding models without the bypassing phenomenon.

Acknowledgments

We thank Hao Zhou for helpful discussions. The Titan Xp GPU used for this research was donated by the NVIDIA Corporation to Olga Vechtomova.

References

  • [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations.
  • [Bowman et al.2015] Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642.
  • [Bowman et al.2016] Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21.
  • [Cao and Clark2017] Kris Cao and Stephen Clark. 2017. Latent variable dialogue models and their diversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 182–187.
  • [Danescu-Niculescu-Mizil and Lee2011] Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 76–87.
  • [Du et al.2017] Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342–1352.
  • [Hinton and Salakhutdinov2006] Geoffrey E Hinton and Ruslan R Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507.
  • [Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
  • [Hu et al.2017] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning, pages 1587–1596.
  • [Kingma and Ba2015] Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations.
  • [Kingma and Welling2014] Diederik P Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations.
  • [Kumar et al.2017] Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. 2017. Variational inference of disentangled latent concepts from unlabeled observations. arXiv preprint arXiv:1711.00848.
  • [Li et al.2016] Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119.
  • [Liu et al.2016] Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132.
  • [Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
  • [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318.
  • [Rajpurkar et al.2016] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392.
  • [Rush et al.2015] Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389.
  • [Serban et al.2017] Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 3295–3301.
  • [Song et al.2018] Yiping Song, Rui Yan, Yansong Feng, Yaoyuan Zhang, Zhao DongYan, and Ming Zhang. 2018. Towards a neural conversation model with diversity net using determinantal point processes. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 5932–5939.
  • [Stent et al.2005] Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Proceedings of International Conference on Intelligent Text Processing and Computational Linguistics, pages 341–351.
  • [Swinscow1976] TD Swinscow. 1976. Statistics at square one: XVIII - Correlation. British Medical Journal, 2(6037):680.
  • [Vijayakumar et al.2016] Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424.
  • [Wainwright et al.2008] Martin J Wainwright, Michael I Jordan, et al. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends® in Machine Learning, pages 1–305.
  • [Zhang et al.2016] Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 521–530.
  • [Zhang et al.2018] Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J Smola, and Le Song. 2018. Variational reasoning for question answering with knowledge graph. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
  • [Zheng et al.2018] Zaixiang Zheng, Hao Zhou, Shujian Huang, Lili Mou, Xinyu Dai, Jiajun Chen, and Zhaopeng Tu. 2018. Modeling past and future for neural machine translation. Transactions of the Association for Computational Linguistics, pages 145–157.
  • [Zhou and Neubig2017] Chunting Zhou and Graham Neubig. 2017. Morphological inflection generation with multi-space variational encoder-decoders. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 58–65.