Sampling Good Latent Variables via CPP-VAEs: VAEs with Condition Posterior as Prior

12/18/2019, by Sadegh Aliakbarian, et al.

In practice, conditional variational autoencoders (CVAEs) perform conditioning by combining two sources of information which are computed completely independently; CVAEs first compute the condition, then sample the latent variable, and finally concatenate these two sources of information. However, these two processes should be tied together such that the model samples a latent variable given the conditioning signal. In this paper, we directly address this by conditioning the sampling of the latent variable on the CVAE condition, thus encouraging it to carry relevant information. We study this specifically for tasks with strong conditioning signals, where the generative models have highly expressive decoders able to generate a sample based solely on the information contained in the condition. In particular, we experiment with the two challenging tasks of diverse human motion generation and diverse image captioning, for which our results suggest that unifying latent variable sampling and conditioning not only yields samples of higher quality, but also helps the model avoid posterior collapse, a known problem of VAEs with expressive decoders.

1 Introduction

Deep generative models offer promising results in generating diverse, realistic samples, such as images, text, motion, and sound, from purely unlabeled data. One example of such successful generative models is the variational autoencoder [17] (VAE), the stochastic variant of the autoencoder, which, thanks to strong and expressive decoders, can generate high-quality samples.

VAEs are a family of generative models that utilize neural networks to learn the distribution of the data. To this end, VAEs first learn to generate a latent variable $z$ given the data $x$, i.e., approximate the posterior distribution $q_\phi(z|x)$, where $\phi$ denotes the parameters of a neural network, the encoder, whose goal is to model the variation of the data. From this latent random variable $z$, VAEs then generate a sample by learning $p_\theta(x|z)$, where $\theta$ denotes the parameters of another neural network, the decoder, whose goal is to maximize the log likelihood of the data.

Figure 1: Training and inference for a standard CVAE. (Left) During the training phase, the encoder takes as input the concatenation of the data and the corresponding condition and compresses it into the latent space. The decoder then samples a latent variable from the approximate posterior computed by the encoder, concatenates the latent variable with the conditioning signal, and reconstructs the data. The approximate posterior distribution of all training samples, i.e., for all conditioning signals, is encouraged to match the prior distribution. (Right) During inference, the decoder samples different latent variables from the prior distribution, concatenates them with the condition, and generates samples that all respect the conditioning signal. However, there is no guarantee that the latent variable is sampled from the region of the prior that corresponds to the condition.

These two networks, i.e., the encoder ($q_\phi$) and the decoder ($p_\theta$), are trained jointly, using a prior over the latent variable. This prior is usually the standard Normal distribution, $\mathcal{N}(0, I)$. Note that VAEs use a variational approximation of the posterior, i.e., $q_\phi(z|x)$, rather than the true posterior. This enables the model to maximize the variational lower bound of the log likelihood with respect to the parameters $\phi$ and $\theta$, given by

$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right] - \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right),$    (1)

where the second term encodes the KL divergence between the posterior and the prior distributions. In practice, the posterior distribution is approximated by a Gaussian $\mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$, whose parameters are output by the encoder. Note that $\sigma$ is a vector, and we define $\sigma^2$ as the vector whose elements are the squared elements of $\sigma$. To facilitate optimization, the reparameterization trick [17] is used. That is, the latent variable is computed as

$z = \mu + \sigma \odot \epsilon,$    (2)

where $\epsilon$ is a vector sampled from the standard Normal distribution $\mathcal{N}(0, I)$ and $\odot$ denotes the element-wise product. As an extension to VAEs, CVAEs use auxiliary information, i.e., a conditioning variable or observation $c$, to generate the data $x$. In the standard setting, both the encoder and the decoder are conditioned on the conditioning variable $c$. That is, the encoder is denoted as $q_\phi(z|x,c)$ and the decoder as $p_\theta(x|z,c)$. Then, the objective of the model becomes

$\mathcal{L}(\theta, \phi; x, c) = \mathbb{E}_{q_\phi(z|x,c)}\!\left[\log p_\theta(x|z,c)\right] - \mathrm{KL}\!\left(q_\phi(z|x,c)\,\|\,p(z|c)\right).$    (3)

As illustrated in Fig. 1, in practice, conditioning is typically done by concatenation; the input of the encoder is the concatenation of the data $x$ and the condition $c$, i.e., $[x, c]$, and that of the decoder the concatenation of the latent variable $z$ and the condition $c$, i.e., $[z, c]$. Thus, the prior distribution is still $\mathcal{N}(0, I)$, and the latent variable is sampled independently of the conditioning one. However, given Eq. 3, one should use the conditional prior $p(z|c)$. This means that it is then left to the decoder to combine the information from the latent and conditioning variables to generate a data sample. We see this as a major limitation when using CVAEs in practice, and, to the best of our knowledge, this problem has never been studied nor addressed in the literature.
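To make the conventional scheme concrete, the following sketch shows a minimal concatenation-based CVAE forward pass in PyTorch. It is only meant to illustrate Eqs. 1-3 and the concatenation-based conditioning discussed above; the module names, layer sizes, and data/condition dimensions are illustrative assumptions, not the architectures used in our experiments.

import torch
import torch.nn as nn

class ConcatCVAE(nn.Module):
    """Minimal concatenation-based CVAE (illustrative sketch)."""
    def __init__(self, data_dim=64, cond_dim=32, latent_dim=128):
        super().__init__()
        self.enc = nn.Linear(data_dim + cond_dim, 256)          # encoder on [x, c]
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(                               # decoder on [z, c]
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, data_dim))

    def forward(self, x, c):
        h = torch.relu(self.enc(torch.cat([x, c], dim=-1)))     # conditioning by concatenation
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        eps = torch.randn_like(mu)                              # Eq. 2: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * eps
        x_hat = self.dec(torch.cat([z, c], dim=-1))             # z and c are only combined here
        return x_hat, mu, logvar

Note that the latent variable z is still drawn from a region tied to N(0, I) and is combined with c only inside the decoder, which is precisely the independence we criticize above.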

In this paper, we introduce an approach to overcome this limitation by explicitly making the sampling of the latent variable depend on the condition. In other words, instead of using $p(z)$ as the prior distribution, we truly use $p(z|c)$. This not only respects the theory behind the design of CVAEs, but, as we empirically evidence, also leads to generating samples of higher quality that preserve the context of the conditioning signal. To achieve this, we develop a CVAE architecture that learns a distribution not only of the latent variable but also of the conditioning one. We then use this distribution as a prior over the latent variable, making its sampling explicitly dependent on the condition. As such, we name our method CPP-VAE, for Condition Posterior as Prior.

We empirically show the effectiveness of our approach for problems that are stochastic in nature. In particular, we focus on scenarios where the training dataset is deterministic, i.e., there is one condition per data sample, and where the conditioning signal is strong enough for an expressive decoder to generate a plausible output from it. Thus, the model does not need to see an informative latent variable to generate a sample of high quality. We also show that, by unifying latent variable sampling and conditioning, we can mitigate the posterior collapse problem, a known problem of VAEs with expressive decoders. This is mainly due to the fact that the decoder no longer receives two separate sources of information, i.e., the latent variable and the condition; thus, the model is prevented from identifying the latent variable and then ignoring it [2].

As a stochastic problem, we evaluate our approach on diverse human motion prediction, that is, forecasting future 3D poses given a sequence of observed ones. In this context, existing methods typically fail to model the stochastic nature of human motion, either because they learn a deterministic mapping from the observations to the output, or because the stochastic latent variables they combine with the observations can be ignored by the model. As an alternative application, we also evaluate our approach on image captioning, i.e., generating diverse and plausible captions describing an image. Our empirical results show that not only does our approach yield a much wider variety of plausible samples than concatenation-based stochastic methods, but it also preserves the semantic information of the condition, such as the type of action performed by the person in motion prediction or visual image elements in captioning, without explicitly exploiting this information.

Remark: Relation to the posterior collapse problem.

Training conditional generative latent variable models is challenging due to posterior collapse, which typically occurs when the conditioning signal is strong and the decoder is expressive enough to generate a plausible sample given only the condition. This phenomenon is even more severe when training on a deterministic dataset, i.e., one with a single sample per condition. Posterior collapse manifests itself by the KL divergence term becoming zero, which means that, regardless of the input, the approximate posterior distribution is equal to the prior distribution. In other words, there is no semantic connection between the encoder and the decoder, and thus the latent variable drawn from the approximate posterior does not convey any useful information to obtain an input-dependent reconstruction. In this case, the decoder generates samples that approximate the mean of the whole training set, thereby minimizing the reconstruction loss. We found that one of the major reasons behind posterior collapse in the case of conditional VAEs with strong conditioning signals and expressive decoders is rooted in the conventional way of conditioning, i.e., through concatenation of the latent variable and the condition. Concatenation lets the decoder decouple the latent variable from the deterministic condition, thus allowing it to optimize its reconstruction loss given only the condition.

2 Unifying Latent Variable Sampling and Conditioning

In this section, we introduce our approach as a general framework with a new conditioning scheme for CVAEs that is capable of generating diverse and plausible samples where the latent variables are sampled from an appropriate region of the prior distribution. In essence, our framework consists of two autoencoders, one acting on the conditioning signal and the other on the samples we wish to learn the distribution of. The latent representation of the condition then serves as conditioning variable to generate the desired samples.

As discussed above, we are interested in problems that are stochastic in nature; given one condition, multiple plausible and natural samples are likely. However, our training data is insufficiently sampled, in that for any given condition, the dataset contains only a single observed sample, in effect making the data appear deterministic. Moreover, in these cases, the condition provides the core signal to generate a good sample, even in a deterministic model. Therefore, it is highly likely that a CVAE trained for this task learns to ignore the latent variable and rely only on the condition to produce its output (related to the posterior collapse problem in strongly conditioned VAEs [2]). Below, we address this by forcing the sampling of the random latent variable to depend on the conditioning one. By making this dependency explicit, we (1) sample an informative latent variable given the condition, and thus generate a sample of higher quality, and (2) prevent the network from ignoring the latent variable in the presence of a strong condition, thus enabling it to generate diverse outputs.

Note that conditioning the VAE encoder via standard strategies, e.g., concatenation, is perfectly fine, since the two inputs to the encoder are deterministic and useful to compress the sample into the latent space. However, conditioning the VAE decoder requires special care that we focus on below.

Figure 2: Illustration of a CVAE architecture and a CPP-VAE architecture. (Top) On the left is the architecture of a standard CVAE, where the representation of the conditioning signal is concatenated to the inputs of the CVAE encoder and decoder. On the right is the illustration of the reparameterization trick used to sample a latent variable from the approximate posterior. Note that the approximate posterior is normally distributed. (Bottom) On the left is the architecture of a CPP-VAE, where the model not only learns the distribution of the data, but also that of the conditioning signal. The posterior distribution of the condition then acts as the prior of the data posterior. On the right is the illustration of our extended reparameterization trick used to sample a latent variable from the new approximate posterior. As shown, the latent variable is sampled given the condition. Note that the approximate posterior is not normally distributed anymore.

2.1 Stochastically Conditioning the Decoder

We propose to make the sampling of the latent variable from the prior/posterior distribution explicitly depend on the condition instead of treating these two variables as independent. To this end, we first learn the distribution of the condition via a simple VAE, which we refer to as CS-VAE because this VAE acts on the conditioning signal. The goal of CS-VAE is to reconstruct the condition given its latent representation. We take the prior of CS-VAE as a standard Normal distribution $\mathcal{N}(0, I)$. Following [17], this allows us to approximate the CS-VAE posterior with a Normal distribution and to draw a sample from it via the reparameterization trick. That is, we write

$z_c = \mu_c + \sigma_c \odot \epsilon, \quad \epsilon \sim \mathcal{N}(0, I),$    (4)

where $\mu_c$ and $\sigma_c$ are the parameters of the posterior distribution generated by the CS-VAE encoder.

Following the same strategy for the VAE acting on the data, called CPP-VAE, would translate to treating the conditioning and the data latent variables independently, which we want to avoid. Therefore, as illustrated in Fig. 2 (Right), we instead define the CPP-VAE posterior as not directly normally distributed, but conditioned on the posterior of CS-VAE. To this end, we extend the standard reparameterization as

$z = \mu + \sigma \odot z_c,$    (5)

where $z_c$ comes from Eq. 4. In fact, $z_c$ in Eq. 4 is a sample from the scaled and translated version of $\epsilon \sim \mathcal{N}(0, I)$ given $\mu_c$ and $\sigma_c$, and $z$ in Eq. 5 is a sample from the scaled and translated version of $z_c$ given $\mu$ and $\sigma$. Since we have access to the observations during both training and testing, we always sample from the condition posterior. As $z$ is sampled given $z_c$, one expects the latent variable to carry information about the strong condition, and thus a sample generated from $z$ to correspond to a plausible sample given the condition. This extended reparameterization trick allows us to avoid conditioning the CPP-VAE decoder by concatenating the latent variable with a deterministic representation of the condition, thus mitigating posterior collapse. However, it changes the variational family of the CPP-VAE posterior. In fact, the posterior is no longer $\mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$, but a Gaussian distribution with mean $\mu + \sigma \odot \mu_c$ and covariance matrix $\mathrm{diag}(\sigma^2 \odot \sigma_c^2)$. This will be accounted for when designing the KL divergence loss discussed below.
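A minimal sketch of the two sampling steps in Eqs. 4 and 5 might look as follows in PyTorch; the tensor names and shapes are illustrative, and the encoders producing (mu_c, sigma_c) and (mu, sigma) are assumed to exist elsewhere.

import torch

def cs_vae_sample(mu_c, sigma_c):
    # Eq. 4: sample z_c from the CS-VAE (condition) posterior
    eps = torch.randn_like(mu_c)              # eps ~ N(0, I)
    return mu_c + sigma_c * eps               # z_c ~ N(mu_c, diag(sigma_c^2))

def cpp_vae_sample(mu, sigma, z_c):
    # Eq. 5: extended reparameterization; z is a scaled and translated z_c,
    # so the data latent variable is sampled given the condition
    return mu + sigma * z_c                   # z ~ N(mu + sigma*mu_c, diag(sigma^2 * sigma_c^2))

# Usage (batch of 8, 128-dimensional latent space); since the condition posterior is
# always available, the same path is used at training and test time.
mu_c, sigma_c = torch.zeros(8, 128), torch.ones(8, 128)
mu, sigma = torch.zeros(8, 128), torch.ones(8, 128)
z = cpp_vae_sample(mu, sigma, cs_vae_sample(mu_c, sigma_c))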

Figure 3: Inference procedure in a CVAE and a CPP-VAE. (Left) Inference with a CPP-VAE: at inference time, since we have access to the conditioning signal, we can use the CS-VAE's encoder to approximate the posterior of each condition. To generate a sample given a condition, the CPP-VAE's decoder then samples a latent variable from the posterior of its condition (rather than from a general prior distribution as in CVAEs) and generates a sample. For instance, two samples generated for the same condition rely on latent variables sampled from a very similar region, whereas a sample generated for a different condition relies on a latent variable sampled from a completely different region, i.e., the region corresponding to the approximate posterior of that condition.

2.2 Learning

To learn the parameters of our model, we rely on the availability of a dataset of training samples, each consisting of a condition and the corresponding desired sample. For CS-VAE, which learns the distribution of the condition, we define the loss as the KL divergence between its posterior and the standard Gaussian prior, that is,

$\mathcal{L}_{\mathrm{KL}}^{\mathrm{CS}} = \mathrm{KL}\!\left(\mathcal{N}(\mu_c, \mathrm{diag}(\sigma_c^2))\,\|\,\mathcal{N}(0, I)\right).$    (6)

By contrast, for CPP-VAE, we define the loss as the KL divergence between the posterior of CPP-VAE and the posterior of CS-VAE, i.e., of the condition. To this end, we freeze the weights of CS-VAE before computing the KL divergence, since we do not want to move the posterior of the condition but that of the data. The KL divergence is then computed as the divergence between two multivariate Normal distributions, encoded by their mean vectors and covariance matrices, as

$\mathcal{L}_{\mathrm{KL}}^{\mathrm{CPP}} = \mathrm{KL}\!\left(\mathcal{N}\!\left(\mu + \sigma \odot \mu_c,\; \mathrm{diag}(\sigma^2 \odot \sigma_c^2)\right)\,\|\,\mathcal{N}\!\left(\mu_c,\; \mathrm{diag}(\sigma_c^2)\right)\right).$    (7)

Let $d$ be the dimensionality of the latent space and $\mathrm{tr}(\cdot)$ the trace of a square matrix. The loss in Eq. (7) can be written as (see Appendix 0.B for more details on the KL divergence between two multivariate Gaussians and the derivation of Eq. 8)

$\mathcal{L}_{\mathrm{KL}}^{\mathrm{CPP}} = \frac{1}{2}\sum_{i=1}^{d}\left[\sigma_i^2 - \log\sigma_i^2 - 1 + \frac{\left(\mu_i - (1-\sigma_i)\,\mu_{c,i}\right)^2}{\sigma_{c,i}^2}\right].$    (8)

After computing the loss in Eq. 8, we unfreeze CS-VAE and update it with its previous gradient. Trying to match the posterior of CPP-VAE to that of CS-VAE allows us to effectively use our extended reparameterization trick in Eq. 5. Furthermore, we use the standard reconstruction loss for both CS-VAE and CPP-VAE, minimizing the negative log-likelihood (NLL) or the mean squared error (MSE) of the condition and the corresponding data, depending on the task. We refer to the reconstruction losses as $\mathcal{L}_{\mathrm{rec}}^{\mathrm{CS}}$ and $\mathcal{L}_{\mathrm{rec}}^{\mathrm{CPP}}$ for CS-VAE and CPP-VAE, respectively. Thus, our complete loss is

$\mathcal{L} = \mathcal{L}_{\mathrm{rec}}^{\mathrm{CS}} + \mathcal{L}_{\mathrm{rec}}^{\mathrm{CPP}} + \mathcal{L}_{\mathrm{KL}}^{\mathrm{CS}} + \mathcal{L}_{\mathrm{KL}}^{\mathrm{CPP}}.$    (9)

In practice, since our VAE appears within a recurrent model, we weigh the KL divergence terms by a function $\lambda$ corresponding to the KL annealing weight of [5]. We start from $\lambda = 0$, forcing the model to encode as much information in $z$ as possible, and gradually increase it to $\lambda = 1$ during training, following a logistic curve. We then continue training with $\lambda = 1$.
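As a sketch of how these terms can be computed in practice, the snippet below evaluates the two KL divergences in closed form for the diagonal Gaussians defined above, together with a logistic annealing weight; the detach-based freezing and the schedule constants are our assumptions rather than the exact training code.

import math
import torch

def kl_cs(mu_c, sigma_c):
    # Eq. 6: KL( N(mu_c, diag(sigma_c^2)) || N(0, I) ), summed over latent dimensions
    return 0.5 * torch.sum(sigma_c**2 + mu_c**2 - 1.0 - torch.log(sigma_c**2), dim=-1)

def kl_cpp(mu, sigma, mu_c, sigma_c):
    # Eqs. 7-8: KL between the CPP-VAE posterior N(mu + sigma*mu_c, diag(sigma^2*sigma_c^2))
    # and the condition posterior N(mu_c, diag(sigma_c^2)), with the CS-VAE statistics frozen
    mu_c, sigma_c = mu_c.detach(), sigma_c.detach()
    var_ratio = sigma**2                       # (sigma^2 * sigma_c^2) / sigma_c^2
    mean_diff = mu + sigma * mu_c - mu_c
    return 0.5 * torch.sum(var_ratio - torch.log(var_ratio) - 1.0
                           + mean_diff**2 / sigma_c**2, dim=-1)

def kl_weight(step, k=0.002, x0=2500):
    # Logistic KL-annealing weight in [0, 1]; k and x0 are illustrative constants
    return 1.0 / (1.0 + math.exp(-k * (step - x0)))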

In short, our method can be interpreted as a simple yet effective framework (designed for CVAEs) for altering the variational family of the posterior such that (1) a latent variable from this posterior distribution is explicitly sampled given the condition, both during training and at inference time, and (2) the model is prevented from posterior collapse by ensuring a positive mismatch between the two distributions in the KL loss of Eq. 8.

3 Experiments

In this paper, we mainly focus on stochastic human motion prediction, where given partial observation, the task is to generate diverse and plausible continuations. Additionally, to show that our CPP-VAE generalizes to other domains, we tackle the problem of stochastic image captioning, where given an image representation, the task is to generate diverse yet related captions.

3.1 Diverse Human Motion Prediction

Dataset.

To evaluate the effectiveness of our approach on the task of stochastic human motion prediction, we use the Human3.6M dataset [15], the largest publicly-available motion capture (mocap) dataset. Human3.6M comprises more than 800 long indoor motion sequences performed by 11 subjects, leading to 3.6M frames. Each frame contains a person annotated with 3D joint positions and rotation matrices for all 32 joints. In our experiments, for our approach and the replicated VAE-based baselines, we represent each joint in 4D quaternion space. We follow the standard preprocessing and evaluation settings used in [27, 11, 29, 16]. We also evaluate our approach on a real-world dataset, Penn Action [38], which contains 2326 sequences of 15 different actions, where for each person, 13 joints are annotated in 2D space. The results on Penn Action are provided in Appendix 0.F.

Evaluation Metrics.

To quantitatively evaluate our approach and other stochastic motion prediction baselines [35, 4, 33, 2], we report the estimated upper bound on the reconstruction error as ELBO, along with the KL divergence on the held-out test set. Additionally, we also use quality [2] and diversity [36, 2, 37] metrics (which should be considered together), a context metric, and the training KL at convergence. To measure the diversity of the motions generated by a stochastic model, we make use of the average distance between all pairs of the $K$ motions generated from the same observation. To measure quality, we train a binary classifier to discriminate real (ground-truth) samples from fake (generated) ones. The accuracy of this classifier on the test set is inversely proportional to the quality of the generated motions. Context is measured by the performance of a good action classifier [22] trained on ground-truth motions. The classifier is then tested on each of the motions generated from each observation. For $N$ observations and $K$ continuations per observation, the accuracy is measured by computing the argmax over each prediction's probability vector, and we report context as the mean class accuracy over the generated motions. For all metrics, we use $K$ motions per test observation. We also provide qualitative results in Appendix 0.L. For all experiments related to motion prediction, we use 16 frames (i.e., 640ms) as observation to generate the next 60 frames (i.e., 2.4sec).
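For reference, the diversity metric described above (the average pairwise distance between the $K$ motions sampled for one observation) can be computed as in the following sketch; taking the Euclidean distance between flattened pose sequences is our assumption.

import torch

def diversity(samples):
    # samples: tensor of shape (K, T, D) holding the K motions generated
    # from the same observation; returns the mean pairwise L2 distance
    k = samples.shape[0]
    flat = samples.reshape(k, -1)
    dists = torch.cdist(flat, flat, p=2)          # (K, K) pairwise distances
    iu = torch.triu_indices(k, k, offset=1)       # each unordered pair counted once
    return dists[iu[0], iu[1]].mean()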

Evaluating Stochasticity.

In Table 1, we compare our approach (with the architecture described in Appendix 0.H) with the state-of-the-art stochastic motion prediction models [35, 2, 33, 4]. Note that one should consider the reported metrics jointly to truly evaluate a stochastic model. For instance, while MT-VAE [35] and HP-GAN [4] generate high-quality motions, they are not diverse. Conversely, while Pose-Knows [33] generates diverse motions, they are of low quality. On the other hand, our approach generates motions that are both of high quality and diverse. This is also the case for Mix-and-Match [2], which, however, preserves much less context. In fact, none of the baselines can effectively convey the context of the observation to the generated motions. As shown in Table 2, the upper bound for the context on Human3.6M is 0.60 (i.e., the performance of the classifier [22] given the ground-truth motions). Our approach yields a context of 0.54 when given only about 20% of the data. Altogether, our approach yields diverse, high-quality, and context-preserving predictions. This is further evidenced by the t-SNE [25] plots of Fig. 5, where samples of different actions are better separated for our approach than for, e.g., MT-VAE [35]. We refer the reader to the human motion prediction related work section in Appendix 0.C for a brief overview of the baselines. We also encourage reading Appendix 0.D for further discussion of the aforementioned baselines and a deeper insight into their behavior under different evaluation metrics.

Figure 4: Qualitative evaluation of the diversity in human motion. The first row illustrates the ground-truth motion. The first six poses of each row depict the observation (the condition) and the rest are sampled from our model. Each row is a randomly sampled motion (not cherry picked). As can be seen, all sampled motions are natural, with a smooth transition from the observed to the generated ones. The diversity increases as we increase the sequence length.
ELBO (KL) Diversity Quality Context Training KL
Method (Reconstructed) (Sampled) (Sampled) (Sampled) (Reconstructed)
MT-VAE [35] 0.51 (0.06) 0.26 0.45 0.42 0.08
Pose-Knows [33] 2.08 (N/A) 1.70 0.13 0.08 N/A
HP-GAN [4] 0.61 (N/A) 0.48 0.47 0.35 N/A
Mix-and-Match [2] 0.55 (2.03) 3.52 0.42 0.37 1.98
CPP-VAE 0.41 (8.07) 3.12 0.48 0.54 6.93
Table 1: Comparison of CPP-VAE with the stochastic motion prediction baselines on Human3.6M.
Figure 5: t-SNE plots of the posterior mean for 3750 test motions. With MT-VAE [35], all classes are mixed, suggesting that the latent variable carries little information about the input. By contrast, our condition-dependent sampling allows CPP-VAE to better preserve context. Note that some actions, such as discussion and directions, are very hard to identify and are thus spread over other actions. Others, such as walking, walking with dog, and walking together or sitting and sitting down overlap due to their similarity.
Setting Obs. Future Motion Context
Lower bound GT Zero velocity 0.38
Upper bound (GT poses as future motion) GT GT 0.60
Ours (sampled motions as future motion) GT Sampled from CPP-VAE 0.54
Table 2: Comparison of the generated motions with the ground-truth future motion in terms of context. The gap between the performance of the state-of-the-art pose-based action classifier [22] with and without true future motions is 0.22. Using our predictions, this gap decreases to 0.06, showing that our predictions reflect the class label.
Walking Eating
Method 80 160 320 400 560 1000 80 160 320 400 560 1000
MT-VAE [35] 0.73 0.79 0.90 0.93 0.95 1.05 0.68 0.74 0.95 1.00 1.03 1.38
HP-GAN [4] 0.61 0.62 0.71 0.79 0.83 1.07 0.53 0.67 0.79 0.88 0.97 1.12
Pose-Knows [33] 0.56 0.66 0.98 1.05 1.28 1.60 0.44 0.60 0.71 0.84 1.05 1.54
Mix&Match [2] 0.33 0.48 0.56 0.58 0.64 0.68 0.23 0.34 0.41 0.50 0.61 0.91
CPP-VAE 0.22 0.36 0.47 0.52 0.58 0.69 0.19 0.28 0.40 0.51 0.58 0.90
Smoking Discussion
Method 80 160 320 400 560 1000 80 160 320 400 560 1000
MT-VAE [35] 1.00 1.14 1.43 1.44 1.68 1.99 0.80 1.01 1.22 1.35 1.56 1.69
HP-GAN [4] 0.64 0.78 1.05 1.12 1.64 1.84 0.79 1.00 1.12 1.29 1.43 1.71
Pose-Knows [33] 0.59 0.83 1.25 1.36 1.67 2.03 0.73 1.10 1.33 1.34 1.45 1.85
Mix&Match [2] 0.23 0.42 0.79 0.77 0.82 1.25 0.25 0.60 0.83 0.89 1.12 1.30
CPP-VAE 0.23 0.43 0.77 0.75 0.78 1.23 0.21 0.52 0.81 0.84 1.04 1.28
Table 3: Comparison with the state-of-the-art stochastic motion prediction models for 4 actions of Human3.6M (all methods use the best of $K$ sampled motions).
Walking Eating
Method 80 160 320 400 560 1000 80 160 320 400 560 1000
Zero Velocity 0.39 0.86 0.99 1.15 1.35 1.32 0.27 0.48 0.73 0.86 1.04 1.38
LSTM-3LR [9] 1.18 1.50 1.67 1.76 1.81 2.20 1.36 1.79 2.29 2.42 2.49 2.82
SRNN [16] 1.08 1.34 1.60 1.80 1.90 2.13 1.35 1.71 2.12 2.21 2.28 2.58
DAE-LSTM [10] 1.00 1.11 1.39 1.48 1.55 1.39 1.31 1.49 1.86 1.89 1.76 2.01
GRU [27] 0.28 0.49 0.72 0.81 0.93 1.03 0.23 0.39 0.62 0.76 0.95 1.08
AGED [11] 0.22 0.36 0.55 0.67 0.78 0.91 0.17 0.28 0.51 0.64 0.86 0.93
DCT-GCN [26] 0.18 0.31 0.49 0.56 0.65 0.67 0.16 0.29 0.50 0.62 0.76 1.12
CPP-VAE (mode) 0.20 0.34 0.48 0.53 0.57 0.71 0.20 0.26 0.44 0.52 0.61 0.92
Smoking Discussion
Method 80 160 320 400 560 1000 80 160 320 400 560 1000
Zero Velocity 0.26 0.48 0.97 0.95 1.02 1.69 0.31 0.67 0.94 1.04 1.41 1.96
LSTM-3LR [9] 2.05 2.34 3.10 3.18 3.24 3.42 2.25 2.33 2.45 2.46 2.48 2.93
SRNN [16] 1.90 2.30 2.90 3.10 3.21 3.23 1.67 2.03 2.20 2.31 2.39 2.43
DAE-LSTM [10] 0.92 1.03 1.15 1.25 1.38 1.77 1.11 1.20 1.38 1.42 1.53 1.73
GRU [27] 0.33 0.61 1.05 1.15 1.25 1.50 0.31 0.68 1.01 1.09 1.43 1.69
AGED [11] 0.27 0.43 0.82 0.84 1.06 1.21 0.27 0.56 0.76 0.83 1.25 1.30
DCT-GCN [26] 0.22 0.41 0.86 0.80 0.87 1.57 0.20 0.51 0.77 0.85 1.33 1.70
CPP-VAE (mode) 0.21 0.43 0.79 0.79 0.77 1.15 0.22 0.55 0.79 0.81 1.05 1.28
Table 4: Comparison with the state-of-the-art deterministic models for 4 actions of Human3.6M.

Evaluating Sampling Quality.

To further evaluate the sampling quality, we evaluate the stochastic baselines using the standard mean angle error (MAE) metric in Euler space. To this end, we use the best of the $K$ generated motions for each observation (aka S-MSE [35]). A model that generates more diverse motions has a higher chance of generating a motion close to the ground-truth one. As shown in Table 3, this is the case for our approach and Mix-and-Match [2], which both yield higher diversity. However, our approach performs better thanks to its context-preserving latent representation and the higher quality of its generated motions.

In Table 4, we compare our approach with the state-of-the-art deterministic motion prediction models [27, 16, 12, 9, 11] using the MAE metric in Euler space. To have a fair comparison, we generate one motion per observation by setting the latent variable to the mode of the distribution. This allows us to generate a plausible motion without having access to the ground truth. To compare against the deterministic baselines, we follow the standard setting, and thus use 50 frames (i.e., 2sec) as observation to generate the next 25 frames (i.e., 1sec). Surprisingly, despite having a very simple motion decoder architecture (a one-layer GRU network) with a very simple reconstruction loss function (MSE), this motion-from-mode strategy yields results that are competitive with those of the baselines that use sophisticated architectures and advanced loss functions. We argue that learning a good, context-preserving latent representation of human motion is the contributing factor to the success of our approach. It could, however, be used in conjunction with sophisticated motion decoders and reconstruction losses, which we leave for future research.

In Appendix 0.E, we study alternative designs to condition the VAE encoder and decoder.

Figure 6: Qualitative evaluation of the diversity in generated captions. While the captions generated by our approach are diverse, they all describe the image properly. The caption generated from the mode is also usually a good description of the image.
Model ELBO (KL) Perplexity Quality Diversity Context Training KL
(Reconstructed) (Reconstructed) (Sampled) (Sampled) (Sampled) (Reconstructed)
Autoregressive 3.01 (N/A) 20.29 0.40 N/A 0.46 N/A
Conditional VAE 2.86 (0.00) 17.46 0.39 0.00 0.44 0.00
CPP-VAE 0.21 (3.28) 1.23 0.40 0.53 0.43 3.11
Table 5: Quantitative evaluation of stochastic image captioning on the MSCOCO Captioning dataset.

3.2 Diverse Image Captioning

For the task of conditional text generation, we focus on stochastic image captioning. To demonstrate the effectiveness of our approach, we report results on the MSCOCO [23] captioning task with the original train/test splits of 83K and 41K images, respectively. The MSCOCO dataset has five captions per image. However, we make it deterministic by removing four captions per image, yielding a Deterministic-MSCOCO captioning dataset. Note that the goal of this experiment is not to advance the state of the art in image captioning, but rather to explore the effectiveness of our approach on a different task, where we have a strong conditioning signal and an expressive decoder in the presence of a deterministic dataset.

A brief review of the recent work on diverse text generation is given in Appendix 0.J.

We compare CPP-VAE (with the architecture described in Appendix 0.I) with a standard CVAE and with its autoregressive, non-variational counterpart (note that CPP-VAE is agnostic to the choice of data encoder/decoder architecture; thus, one could use more sophisticated architectures, which we leave for future research). For quantitative evaluation, we report the ELBO (the negative log-likelihood), along with the KL divergence and the perplexity of the reconstructed captions on the held-out test set. We also quantitatively measure the diversity, the quality, and the context of the sampled captions. To measure the context, we rely on the BLEU1 score, making sure that the sampled captions represent elements that appear in the image. For CVAE and CPP-VAE, we compute the average BLEU1 score of the captions sampled per image and report the mean over the images. To measure the diversity, we compute the BLEU4 score between every pair of sampled captions per image. The smaller the BLEU4, the more diverse the captions. The diversity metric is then 1 - BLEU4, i.e., the higher the better. To measure the quality, we use a metric similar to that in our human motion prediction experiments, obtained by training a binary classifier to discriminate real (ground-truth) captions from fake (generated) ones. The accuracy of this classifier on the test set is inversely proportional to the quality of the generated captions. We expect a good stochastic model to have high quality and high diversity at the same time, while capturing the context of the given image. We provide qualitative examples for all the methods in Appendix 0.M.

As shown in Table 5, a CVAE learns to ignore the latent variable, as it can minimize the caption reconstruction loss given solely the image representation. By doing so, all the captions generated at test time are identical, despite sampling multiple latent variables. This can be further seen in the ELBO and perplexity of the reconstructed captions. We expect a model that gets as input the captions and the image to have a much lower reconstruction loss than the autoregressive baseline (which gets only the image as input). However, this is not the case for the CVAE, indicating that the connection between the encoder and the decoder, i.e., the latent variable, does not carry essential information about the input caption; the quality of the generated samples is nonetheless reasonably good. This is also illustrated in the qualitative evaluations in Appendix 0.M. CPP-VAE, on the other hand, is able to effectively handle this situation by unifying the sampling of the latent variable and the conditioning, leading to diverse yet high-quality captions, as reflected by the ELBO of our approach in Table 5 and the qualitative results in Appendix 0.M. Additional quantitative evaluations and ablation studies for image captioning are provided in Appendix 0.K.
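The BLEU-based context and diversity measures described above can be computed, for instance, with NLTK as sketched below; the whitespace tokenization and the smoothing function are our assumptions.

from itertools import combinations
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

def context_bleu1(reference_caption, sampled_captions):
    # Mean BLEU1 of the sampled captions against the reference caption of the image
    ref = [reference_caption.split()]
    scores = [sentence_bleu(ref, c.split(), weights=(1, 0, 0, 0),
                            smoothing_function=smooth) for c in sampled_captions]
    return sum(scores) / len(scores)

def diversity_1_minus_bleu4(sampled_captions):
    # 1 - mean pairwise BLEU4 between sampled captions (higher means more diverse)
    scores = [sentence_bleu([a.split()], b.split(), weights=(0.25, 0.25, 0.25, 0.25),
                            smoothing_function=smooth)
              for a, b in combinations(sampled_captions, 2)]
    return 1.0 - sum(scores) / len(scores)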

4 Conclusion

In this paper, we have studied the problem of conditionally generating diverse sequences, with a focus on scenarios where the conditioning signal is strong enough for an expressive decoder to generate plausible samples from it alone. In standard CVAEs, the sampling of latent variables is completely independent of the conditioning signal. However, these two variables should be tied together such that the latent variable is sampled given the condition. We have addressed this problem by forcing the sampling of the latent variable to depend on the conditioning one. By making this dependency explicit, the model receives a latent variable that carries information about the condition during both training and test time. This further prevents the network from ignoring the latent variable in the presence of a strong condition, thus enabling it to generate diverse outputs. To demonstrate the effectiveness of our approach, we have investigated two application domains: stochastic human motion prediction and diverse image captioning. In both cases, our CPP-VAE model was able to generate diverse and plausible samples, as well as to retain contextual information, leading to semantically-meaningful predictions. In the future, we will apply our approach to other problems that rely on strong conditions, such as image inpainting and super-resolution, for which only deterministic datasets are available.

References

  • [1] M. S. Aliakbarian, F. S. Saleh, M. Salzmann, B. Fernando, L. Petersson, and L. Andersson (2018) VIENA: a driving anticipation dataset. In Asian Conference on Computer Vision, pp. 449–466.
  • [2] M. S. Aliakbarian, F. S. Saleh, M. Salzmann, L. Petersson, S. Gould, and A. Habibian (2019) Learning variations in human motion via mix-and-match perturbation. arXiv preprint arXiv:1908.00733.
  • [3] M. S. Aliakbarian, F. Saleh, B. Fernando, M. Salzmann, L. Petersson, and L. Andersson (2016) Deep action- and context-aware sequence learning for activity recognition and anticipation. arXiv preprint arXiv:1611.05520.
  • [4] E. Barsoum, J. Kender, and Z. Liu (2018) HP-GAN: probabilistic 3D human motion prediction via GAN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1418–1427.
  • [5] S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio (2015) Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.
  • [6] J. Cho, M. Seo, and H. Hajishirzi (2019) Mixture content selection for diverse sequence generation. arXiv preprint arXiv:1909.01953.
  • [7] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
  • [8] L. Fang, C. Li, J. Gao, W. Dong, and C. Chen (2019) Implicit deep latent variable models for text generation. arXiv preprint arXiv:1908.11527.
  • [9] K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik (2015) Recurrent network models for human dynamics. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4346–4354.
  • [10] P. Ghosh, J. Song, E. Aksan, and O. Hilliges (2017) Learning human motion models for long-term predictions. In 2017 International Conference on 3D Vision (3DV), pp. 458–466.
  • [11] L. Gui, Y. Wang, X. Liang, and J. M. Moura (2018) Adversarial geometry-aware human motion prediction. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 786–803.
  • [12] L. Gui, Y. Wang, D. Ramanan, and J. M. Moura (2018) Few-shot human motion prediction via meta-learning. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 432–450.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [14] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
  • [15] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu (2014) Human3.6M: large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (7), pp. 1325–1339.
  • [16] A. Jain, A. R. Zamir, S. Savarese, and A. Saxena (2016) Structural-RNN: deep learning on spatio-temporal graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5308–5317.
  • [17] D. P. Kingma and M. Welling (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
  • [18] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
  • [19] M. Kumar, M. Babaeizadeh, D. Erhan, C. Finn, S. Levine, L. Dinh, and D. Kingma (2019) VideoFlow: a flow-based generative model for video. arXiv preprint arXiv:1903.01434.
  • [20] J. N. Kundu, M. Gor, and R. V. Babu (2018) BiHMP-GAN: bidirectional 3D human motion prediction GAN. arXiv preprint arXiv:1812.02591.
  • [21] B. Li, J. He, G. Neubig, T. Berg-Kirkpatrick, and Y. Yang (2019) A surprisingly effective fix for deep latent variable modeling of text. arXiv preprint arXiv:1909.00868.
  • [22] C. Li, Q. Zhong, D. Xie, and S. Pu (2018) Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. arXiv preprint arXiv:1804.06055.
  • [23] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In European Conference on Computer Vision, pp. 740–755.
  • [24] X. Lin and M. R. Amer (2018) Human motion modeling using DVGANs. arXiv preprint arXiv:1804.10652.
  • [25] L. van der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9 (Nov), pp. 2579–2605.
  • [26] W. Mao, M. Liu, M. Salzmann, and H. Li (2019) Learning trajectory dependencies for human motion prediction. In ICCV.
  • [27] J. Martinez, M. J. Black, and J. Romero (2017) On human motion prediction using recurrent neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4674–4683.
  • [28] D. Pavllo, C. Feichtenhofer, M. Auli, and D. Grangier (2019) Modeling human motion with quaternion-based neural networks. arXiv preprint arXiv:1901.07677.
  • [29] D. Pavllo, D. Grangier, and M. Auli (2018) QuaterNet: a quaternion-based recurrent model for human motion. arXiv preprint arXiv:1805.06485.
  • [30] C. Rodriguez, B. Fernando, and H. Li (2018) Action anticipation by predicting future dynamic images. In Proceedings of the European Conference on Computer Vision (ECCV).
  • [31] M. Sadegh Aliakbarian, F. Sadat Saleh, M. Salzmann, B. Fernando, L. Petersson, and L. Andersson (2017) Encouraging LSTMs to anticipate actions very early. In The IEEE International Conference on Computer Vision (ICCV).
  • [32] T. Shen, M. Ott, M. Auli, and M. Ranzato (2019) Mixture models for diverse machine translation: tricks of the trade. arXiv preprint arXiv:1902.07816.
  • [33] J. Walker, K. Marino, A. Gupta, and M. Hebert (2017) The pose knows: video forecasting by generating pose futures. In Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 3352–3361.
  • [34] R. J. Williams and D. Zipser (1989) A learning algorithm for continually running fully recurrent neural networks. Neural Computation 1 (2), pp. 270–280.
  • [35] X. Yan, A. Rastogi, R. Villegas, K. Sunkavalli, E. Shechtman, S. Hadap, E. Yumer, and H. Lee (2018) MT-VAE: learning motion transformations to generate multimodal human dynamics. In European Conference on Computer Vision, pp. 276–293.
  • [36] D. Yang, S. Hong, Y. Jang, T. Zhao, and H. Lee (2019) Diversity-sensitive conditional generative adversarial networks. In International Conference on Learning Representations.
  • [37] Y. Yuan and K. Kitani (2019) Diverse trajectory forecasting with determinantal point processes. arXiv preprint arXiv:1907.04967.
  • [38] W. Zhang, M. Zhu, and K. G. Derpanis (2013) From actemes to action: a strongly-supervised representation for detailed action understanding. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2248–2255.

Appendix 0.A Detailed Technical Background on Evidence Lower Bound

To solve the maximum likelihood problem, we would like to have access to the marginal likelihood $p_\theta(x)$ and the true posterior $p_\theta(z|x)$. Using variational inference, we aim to approximate the true posterior with another distribution $q_\phi(z|x)$. This distribution is computed via another neural network parameterized by $\phi$ (called the variational parameters), such that $q_\phi(z|x) \approx p_\theta(z|x)$. Using such an approximation, Variational Autoencoders [17], or VAEs in short, are able to optimize the marginal likelihood in a tractable way. The optimization objective of the VAE is a variational lower bound, also known as the evidence lower bound, or ELBO in short. Recall that variational inference aims to find an approximation of the posterior that represents the true one. One way to do this is to minimize the divergence between the approximate and the true posterior using the Kullback-Leibler divergence, or KL divergence in short. That is,

$\mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right) = \int q_\phi(z|x)\,\log\frac{q_\phi(z|x)}{p_\theta(z|x)}\,dz.$    (10)

This can be seen as an expectation,

$\mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right) = \mathbb{E}_{q_\phi(z|x)}\!\left[\log q_\phi(z|x) - \log p_\theta(z|x)\right].$    (11)

The second term above, i.e., the true posterior, can, according to Bayes' theorem, be written as $p_\theta(z|x) = \frac{p_\theta(x|z)\,p(z)}{p_\theta(x)}$. The data likelihood $p_\theta(x)$ is independent of the latent variable $z$, and can thus be pulled out of the expectation term,

$\mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right) = \mathbb{E}_{q_\phi(z|x)}\!\left[\log q_\phi(z|x) - \log p_\theta(x|z) - \log p(z)\right] + \log p_\theta(x).$    (12)

By shifting the $\log p_\theta(x)$ term to the other side of the above equation, we can write

$\log p_\theta(x) - \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right) = \mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right] - \mathbb{E}_{q_\phi(z|x)}\!\left[\log q_\phi(z|x) - \log p(z)\right].$    (13)

The second expectation term in the above equation is, by definition, the KL divergence between the approximate posterior and the prior distributions. Thus, this can be written as

$\log p_\theta(x) - \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right) = \mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right] - \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right).$    (14)

In the above equation, $\log p_\theta(x)$ is the log-likelihood of the data, which we would like to optimize. $\mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right)$ is the KL divergence between the approximate and the true posterior distributions; while it is not computable, we know by definition that it is non-negative. $\mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right]$ corresponds to the reconstruction loss, and $\mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right)$ is the KL divergence between the approximate posterior distribution and a prior over the latent variable. The last term can be seen as a regularizer of the latent representation. Therefore, the intractability and non-negativity of $\mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right)$ only allow us to optimize a lower bound on the log-likelihood of the data,

$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right] - \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right),$    (15)

which we call the variational or evidence lower bound (ELBO).

Appendix 0.B KL Divergence Between Two Gaussian Distributions

In our approach, the model encourages the posterior of CPP-VAE to be close to that of CS-VAE. In general, the KL divergence between two distributions $q$ and $p$ is defined as

$\mathrm{KL}(q\,\|\,p) = \int q(z)\,\log\frac{q(z)}{p(z)}\,dz.$    (16)

In the general case, one can have a multivariate Gaussian distribution $\mathcal{N}(\mu, \Sigma)$ in $\mathbb{R}^d$, where $\mu$ and $\Sigma$ are predicted by the encoder network of the VAE. The density function of such a distribution is

$\mathcal{N}(z; \mu, \Sigma) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\,\exp\!\left(-\frac{1}{2}(z-\mu)^{\top}\Sigma^{-1}(z-\mu)\right).$    (17)

Thus, the KL divergence between two multivariate Gaussians is computed as

$\mathrm{KL}\!\left(\mathcal{N}(\mu_1, \Sigma_1)\,\|\,\mathcal{N}(\mu_2, \Sigma_2)\right) = \frac{1}{2}\left[\log\frac{|\Sigma_2|}{|\Sigma_1|} - d + \mathrm{tr}\!\left(\Sigma_2^{-1}\Sigma_1\right) + (\mu_2-\mu_1)^{\top}\Sigma_2^{-1}(\mu_2-\mu_1)\right],$    (18)

where $\mathrm{tr}(\cdot)$ is the trace operation. In Eq. 18, the covariance matrix $\Sigma_1$ and mean $\mu_1$ correspond to distribution $q$, and the covariance matrix $\Sigma_2$ and mean $\mu_2$ correspond to distribution $p$. For diagonal covariance matrices, Eq. 18 simplifies to a sum over the $d$ dimensions,

$\mathrm{KL}\!\left(\mathcal{N}(\mu_1, \Sigma_1)\,\|\,\mathcal{N}(\mu_2, \Sigma_2)\right) = \frac{1}{2}\sum_{i=1}^{d}\left[\log\frac{\sigma_{2,i}^2}{\sigma_{1,i}^2} - 1 + \frac{\sigma_{1,i}^2}{\sigma_{2,i}^2} + \frac{(\mu_{2,i}-\mu_{1,i})^2}{\sigma_{2,i}^2}\right].$    (19)

Given Eq. 19, we can then compute the KL divergence between the posterior of CPP-VAE, with mean $\mu + \sigma \odot \mu_c$ and covariance matrix $\mathrm{diag}(\sigma^2 \odot \sigma_c^2)$, and the posterior of CS-VAE, with mean $\mu_c$ and covariance matrix $\mathrm{diag}(\sigma_c^2)$. Let $d$ be the dimensionality of the latent space and $\mathrm{tr}(\cdot)$ the trace of a square matrix. The loss in Eq. (7) can then be written as

$\mathcal{L}_{\mathrm{KL}}^{\mathrm{CPP}} = \frac{1}{2}\sum_{i=1}^{d}\left[\log\frac{\sigma_{c,i}^2}{\sigma_i^2\,\sigma_{c,i}^2} - 1 + \frac{\sigma_i^2\,\sigma_{c,i}^2}{\sigma_{c,i}^2} + \frac{\left(\mu_{c,i} - \mu_i - \sigma_i\,\mu_{c,i}\right)^2}{\sigma_{c,i}^2}\right].$    (20)

Since $\sigma_{c,i}^2$ appears in both the numerator and the denominator of the log and variance-ratio terms, it cancels out, which yields

$\mathcal{L}_{\mathrm{KL}}^{\mathrm{CPP}} = \frac{1}{2}\sum_{i=1}^{d}\left[\sigma_i^2 - \log\sigma_i^2 - 1 + \frac{\left(\mu_i - (1-\sigma_i)\,\mu_{c,i}\right)^2}{\sigma_{c,i}^2}\right].$    (21)
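As a sanity check on the closed form above, one can compare Eq. 21 against the divergence computed by torch.distributions for the two diagonal Gaussians; this is an illustrative verification, not part of the training code.

import torch
from torch.distributions import Normal, Independent, kl_divergence

torch.manual_seed(0)
d = 128
mu, sigma = torch.randn(d), torch.rand(d) + 0.1       # CPP-VAE encoder outputs
mu_c, sigma_c = torch.randn(d), torch.rand(d) + 0.1   # CS-VAE encoder outputs

# CPP-VAE posterior implied by Eq. 5, and the CS-VAE posterior acting as the prior
q = Independent(Normal(mu + sigma * mu_c, sigma * sigma_c), 1)
p = Independent(Normal(mu_c, sigma_c), 1)
kl_ref = kl_divergence(q, p)

# Closed form of Eq. 21
kl_closed = 0.5 * torch.sum(sigma**2 - torch.log(sigma**2) - 1.0
                            + (mu - (1.0 - sigma) * mu_c)**2 / sigma_c**2)

print(kl_ref.item(), kl_closed.item())                # the two values should agree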

Appendix 0.C Stochastic Human Motion Prediction Related Work

Most motion prediction methods are based on deterministic models [29, 28, 11, 16, 27, 12, 9, 10], casting motion prediction as a regression task where only one outcome is possible given the observation. While this may produce accurate predictions, it fails to reflect the stochastic nature of human motion, where multiple futures can be highly likely for a single given series of observations. Modeling stochasticity is the topic of this paper, and we therefore focus the discussion below on the other methods that have attempted to do so.

The general trend to incorporate variations in the predicted motions consists of combining information about the observed pose sequence with a random vector. In this context, two types of approaches have been studied: the techniques that directly incorporate the random vector into the RNN decoder and those that make use of an additional CVAE. In the first class of methods, [24] samples a random vector at each time step and adds it to the pose input of the RNN decoder. By relying on different random vectors at each time step, however, this strategy is prone to generating discontinuous motions. To overcome this, [20] makes use of a single random vector to generate the entire sequence. This vector is both employed to alter the initialization of the decoder and concatenated with a pose embedding at each iteration of the RNN. By relying on concatenation, these two methods contain parameters that are specific to the random vector, and thus give the model the flexibility to ignore this information. In [4], instead of using concatenation, the random vector is added to the hidden state produced by the RNN encoder. While addition prevents having parameters that are specific to the random vector, this vector is first transformed by multiplication with a learnable parameter matrix, and thus can again be zeroed out so as to remove the source of diversity, as observed in our experiments. The second category of stochastic methods introduces an additional CVAE between the RNN encoder and decoder. This allows them to learn a more meaningful transformation of the noise, combined with the conditioning variables, before passing the resulting information to the RNN decoder. In this context, [33] proposes to directly use the pose as conditioning variable. As will be shown in our experiments, while this approach is able to maintain some degree of diversity, albeit less than ours, it yields motions of lower quality because of its use of independent random vectors at each time step. Instead of perturbing the pose, the recent work [35] uses the RNN decoder hidden state as conditioning variable in the CVAE, concatenating it with the random vector. While this approach generates high-quality motions, it suffers from the fact that the CVAE decoder gives the model the flexibility to ignore the random vector, which therefore yields low-diversity outputs. Similar to [35], Mix-and-Match [2] perturbs the hidden states, but replaces the deterministic concatenation operation with a stochastic perturbation of the hidden state with the noise. Through such a perturbation, the decoder is not able to decouple the noise and the condition, a phenomenon that occurs with concatenation [35]. However, since the perturbation is not learned and is a non-parametric operation, the quality of the generated motions is comparatively low.

Generating diverse plausible motions given limited observations has many applications, especially when the motions are generated in an action-agnostic manner, as done in our work. For instance, our model can be used for human action forecasting [3, 30, 31, 1], where one seeks to anticipate the action as early as possible and where human motion/poses constitute one of the modalities used.

Appendix 0.D Further Discussion on the Performance of Stochastic Baselines

The MT-VAE model [35] tends to ignore the random variable $z$, thus ignoring the root of variation. As a consequence, it achieves a low diversity, much lower than ours, but produces samples of high quality, albeit almost identical (see the qualitative comparison of the different baselines in the appendix). To further confirm that MT-VAE ignores the latent variable, we performed an additional experiment where, at test time, we sampled each element of the random vector independently from a distribution different from the prior $\mathcal{N}(0, I)$. This led to neither loss of quality nor increase of diversity of the generated motions. Experiments on the HP-GAN model [4] evidence the limited diversity of the sampled motions despite its use of random noise during inference. Note that the authors of [4] mention in their paper that the random noise is added to the hidden state. Only by studying their publicly available code (https://github.com/ebarsoum/hpgan) did we understand the precise way this combination was done. In fact, the addition relies on a parametric, linear transformation of the noise vector. That is, the perturbed hidden state is obtained as

$\tilde{h} = h + W z,$    (22)

where $h$ is the hidden state, $z$ the noise vector, and $W$ a learnable parameter matrix. Because the parameters $W$ are learned, the model has the flexibility to ignore $z$, which leads to the low diversity of the sampled motions. Note that the authors of [4] acknowledged that, despite their best efforts, they noticed very little variation between predictions obtained with different noise values. Since the perturbation is ignored, however, the quality of the generated motions is high. The other baseline, Pose-Knows [33], produces motions with higher diversity than the aforementioned two baselines, but of much lower quality. The main reason behind this is that the random vectors that are concatenated to the poses at each time-step are sampled independently of each other, which translates to discontinuities in the generated motions. This problem might be mitigated by sampling the noise in a time-dependent, autoregressive manner, as in [19] for video generation. Doing so, however, goes beyond the scope of our analysis. The Mix-and-Match approach [2] yields sampled motions with higher diversity and reasonable quality. The architecture of Mix-and-Match is very close to that of MT-VAE, but replaces the deterministic concatenation operation with a stochastic perturbation of the hidden state with the noise. Through such a perturbation, the decoder is not able to decouple the noise and the condition, a phenomenon that occurs with concatenation. However, since the perturbation is not learned and is a non-parametric operation, the quality of the generated motions is lower than ours and that of the other baselines (except for Pose-Knows). We see the Mix-and-Match perturbation as a workaround to the posterior collapse problem that sacrifices the quality and the context of the sampled motions. We also provide a more complete related-work discussion on diverse human motion prediction in Appendix 0.C.

Appendix 0.E Ablation Study on Different Means of Conditioning

In addition to the experiments in the main paper, we also study various designs to condition the VAE encoder and decoder. As discussed before, conditioning the VAE encoder can safely be done by concatenating two deterministic sources of information, i.e., the representations of the past and the future, since both sources are useful to compress the future motion into the latent space. In Table 6, we use both a deterministic representation of the observation and a stochastic one as the conditioning variable for the encoder. Similarly, we compare the use of either of these variables via concatenation with that of our modified reparameterization trick (Eq. 5). This shows that, to condition the decoder, reparameterization is highly effective at addressing posterior collapse. Furthermore, for the encoder, a deterministic condition works better than a stochastic one. When both the encoder and the decoder are conditioned via deterministic conditioning variables, i.e., row 2 in Table 6, the model learns to ignore the latent variable and rely solely on the condition, as evidenced by the KL term tending to zero.

Encoder Conditioning Decoder Conditioning CPP-VAE’s Training KL
Concatenation () Reparameterization () 6.92
Concatenation () Concatenation () 0.04
Concatenation () Concatenation () 0.61
Concatenation () Reparameterization () 8.07
Table 6: Evaluation of various architecture designs for a CVAE. A smaller KL value, indicating posterior collapse, leads to less diversity.

Appendix 0.F Experimental Results on Penn Action Dataset

As a complementary experiment, we evaluate our approach on the Penn Action dataset, which contains 2326 sequences of 15 different actions, where for each person, 13 joints are annotated in 2D space. Most sequences have less than 50 frames and the task is to generate the next 35 frames given the first 15. Results are provided in Table 7. Note that the upper bound for the Context metric is 0.74, i.e., the classification performance given the Penn Action ground-truth motions.

ELBO (KL) Diversity Quality Context Training KL
Method (Reconstructed) (Sampled) (Sampled) (Sampled) (Reconstructed)
CPP-VAE 0.034 (6.07) 1.21 0.46 0.70 4.84
Autoregressive Counterpart 0.048 (N/A) 0.00 0.46 0.51 N/A
Table 7: Quantitative evaluation on the Penn Action dataset. Note, the diversity of 1.21 is reasonably high for normalized 2D joint positions, i.e., values between 0 and 1, normalized with the width and the height of the image.

Appendix 0.G Pseudo-code for CPP-VAE

Here, we provide the forward pass pseudo-codes for both CS-VAE and CPP-VAE.

1: procedure CS-VAE(x_{1:t})                                   ▷ Human motion up to time t or the source text
2:     h_c = EncodeCondition(x_{1:t})                           ▷ Observed motion/source text encoder
3:     mu_c, sigma_c = CS-VAE.Encode(h_c)
4:     Sample epsilon ~ N(0, I)                                 ▷ Sample from standard Gaussian
5:     z_c = mu_c + sigma_c * epsilon                           ▷ Reparameterization (Eq. 4)
6:     h_c_rec = CS-VAE.Decode(z_c)
7:     x_hat_{1:t} = DecodeCondition(h_c_rec, seed)
8:     return z_c, mu_c, sigma_c, x_hat_{1:t}, and h_c          ▷ h_c and z_c condition the CPP-VAE encoder and decoder, respectively
Algorithm 1 A forward pass of CS-VAE
1: procedure CPP-VAE(x_{t+1:T}, z_c, h_c)                       ▷ Human motion from t+1 to T or the target text
2:     if isTraining then
3:         h = EncodeData(x_{t+1:T})                            ▷ Future motion/target sentence encoder
4:         h = Concatenate(h, h_c)
5:         mu, sigma = CPP-VAE.Encode(h)
6:         z = mu + sigma * z_c                                 ▷ Our extended reparameterization (Eq. 5)
7:     else
8:         z = z_c
9:     h_rec = CPP-VAE.Decode(z)
10:    x_hat_{t+1:T} = DecodeData(h_rec, seed)
11:    return x_hat_{t+1:T}, mu, sigma
Algorithm 2 A forward pass of CPP-VAE

Appendix 0.H Stochastic Human Motion Prediction Architecture

Our motion prediction model follows the architecture depicted in Fig. 2 (a). Below, we describe the architecture of each component in our model. Note that human poses, consisting of 32 joints in the case of the Human3.6M dataset, are represented in 4D quaternion space. Thus, each pose at each time-step is represented with a vector of size 32 × 4 = 128. All the tensor sizes described below ignore the mini-batch dimension for simplicity.

Observed motion encoder, or the CS-VAE's motion encoder, is a single-layer GRU [7] network with 1024 hidden units. If the observation sequence has length t, the observed motion encoder maps the t observed poses into a single hidden representation of size 1024, i.e., the hidden state of the last time-step. This hidden state acts as the condition to the CPP-VAE's encoder and as the direct input to the CS-VAE's encoder.

CS-VAE, similar to any variational autoencoder, has an encoder and a decoder. The CS-VAE's encoder is a fully-connected network with ReLU non-linearities, mapping the 1024-dimensional hidden state of the motion encoder to an intermediate embedding. Then, to generate the mean and standard deviation vectors, two fully-connected branches are considered, mapping this embedding to a vector of means of size 128 and a vector of standard deviations of size 128, where 128 is the length of the latent variable. Note that we apply a ReLU non-linearity to the vector of standard deviations to make sure it is non-negative. We then use the reparameterization trick [17] to sample a latent variable of size 128. The CS-VAE's decoder consists of multiple fully-connected layers, mapping the latent variable to a vector of size 1024 that acts as the initial hidden state of the observed motion decoder. Note that we apply a Tanh non-linearity to the generated hidden state to mimic the properties of a GRU hidden state.
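The following is a compact sketch of this encoder/decoder pair; the 1024-dimensional input and the 128-dimensional latent variable follow the text, while the intermediate embedding size of 512 is an assumption.

import torch
import torch.nn as nn

class CSVAE(nn.Module):
    # Sketch of the CS-VAE; the embedding size (512) is an assumption.
    def __init__(self, cond_dim=1024, emb_dim=512, latent_dim=128):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(cond_dim, emb_dim), nn.ReLU())
        self.to_mu = nn.Linear(emb_dim, latent_dim)
        self.to_sigma = nn.Sequential(nn.Linear(emb_dim, latent_dim), nn.ReLU())  # keep sigma non-negative
        self.decode = nn.Sequential(nn.Linear(latent_dim, cond_dim), nn.Tanh())   # mimic a GRU hidden state

    def forward(self, h):
        e = self.embed(h)
        mu, sigma = self.to_mu(e), self.to_sigma(e)
        z_c = mu + sigma * torch.randn_like(sigma)    # standard reparameterization trick
        return self.decode(z_c), z_c, mu, sigma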

Observed motion decoder, or the CS-VAE's motion decoder, is similar to its motion encoder, except that it reconstructs the motion autoregressively. Additionally, it is initialized with the reconstructed hidden state, i.e., the output of the CS-VAE's decoder. The output of each GRU cell at each time-step is then fed to a fully-connected layer that maps it to a vector of size 128, representing a human pose with 32 joints in 4D quaternion space. To decode the motions, we use a teacher-forcing technique [34] during training: at each time-step, the network chooses with some probability whether to use its own output at the previous time-step or the ground-truth pose as input. This probability is decreased linearly at each training epoch such that, after a certain number of epochs, the model becomes completely autoregressive, i.e., uses only its own output as input to the next time-step. Note that, at test time, motions are generated completely autoregressively, i.e., the ground-truth poses are never used.
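A minimal sketch of this teacher-forcing loop is shown below; gru_cell and to_pose are placeholders (e.g., an nn.GRUCell(128, 1024) and an nn.Linear(1024, 128)), and p_teacher is the probability of feeding the ground-truth pose, set to 0 at test time.

import torch

def decode_motion(gru_cell, to_pose, h0, first_pose, gt_poses, p_teacher):
    # Autoregressive decoding with teacher forcing; gt_poses has shape (batch, T, pose_dim).
    h, x, outputs = h0, first_pose, []
    for t in range(gt_poses.size(1)):
        h = gru_cell(x, h)                            # one GRU step
        pose = to_pose(h)                             # map the hidden state to a pose vector
        outputs.append(pose)
        use_gt = torch.rand(1).item() < p_teacher     # scheduled choice of the next input
        x = gt_poses[:, t] if use_gt else pose
    return torch.stack(outputs, dim=1)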

Note that the future motion encoder and decoder have exactly the same architectures as the observed motion ones; the only difference is their input, which is the future motion, i.e., the poses that follow the observed ones in a sequence. In the following, we describe the architecture of CPP-VAE for motion prediction.

CPP-VAE is a conditional variational autoencoder. Its encoder's input is a representation of the future motion, i.e., the last hidden state of the future motion encoder, conditioned on the representation of the observed motion. The conditioning is done by concatenation; thus, the input to the encoder is a representation of size 2048, i.e., two concatenated 1024-dimensional hidden states. The CPP-VAE's encoder, similar to the CS-VAE's encoder, maps its input representation to an intermediate embedding. Then, to generate the mean and standard deviation vectors, two fully-connected branches are considered, mapping this embedding to a vector of means of size 128 and a vector of standard deviations of size 128, where 128 is the length of the latent variable. Note that we apply a ReLU non-linearity to the vector of standard deviations to make sure it is non-negative. To sample the latent variable, we use our extended reparameterization trick, explained in Eq. 5, which unifies the conditioning and the sampling of the latent variable. Then, similar to the CS-VAE, the latent variable is fed to the CPP-VAE's decoder, a fully-connected network that maps the latent representation of size 128 to a reconstructed hidden state of size 1024 for the future motion. Note that we apply a Tanh non-linearity to the generated hidden state to mimic the properties of a GRU hidden state.
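Mirroring the CS-VAE sketch above, the CPP-VAE for motion could be sketched as follows; the embedding size of 512 is again an assumption, and the line implementing Eq. 5 reflects our reading of the extended reparameterization rather than its exact form.

import torch
import torch.nn as nn

class CPPVAE(nn.Module):
    # Sketch of the CPP-VAE for motion; the embedding size (512) is an assumption.
    def __init__(self, hidden_dim=1024, emb_dim=512, latent_dim=128):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(2 * hidden_dim, emb_dim), nn.ReLU())
        self.to_mu = nn.Linear(emb_dim, latent_dim)
        self.to_sigma = nn.Sequential(nn.Linear(emb_dim, latent_dim), nn.ReLU())  # keep sigma non-negative
        self.decode = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.Tanh()) # mimic a GRU hidden state

    def forward(self, h_future, h_cond, z_c):
        e = self.embed(torch.cat([h_future, h_cond], dim=-1))  # encoder conditioned by concatenation
        mu, sigma = self.to_mu(e), self.to_sigma(e)
        z = mu + sigma * z_c                                    # assumed form of the extended reparameterization (Eq. 5)
        return self.decode(z), mu, sigma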

Appendix 0.I Diverse Image Captioning Architecture

Our diverse image captioning model follows the architecture depicted in Fig. 2 (a). Below, we describe the architecture of each component in our model. Note, all tensor sizes described below ignore the mini-batch dimension for simplicity.

Image encoder is, in this case, a ResNet152 [13] pretrained on ImageNet [18], and the conditioning signal is the feature representation it produces for the input image. Note that, to avoid an undesirable equilibrium in the reconstruction loss of the CS-VAE, we freeze the ResNet152 during training.

CS-VAE is a standard variational autoencoder. Its encoder maps the input image representation to an intermediate embedding. Then, to generate the mean and standard deviation vectors, two fully-connected branches are considered, mapping this embedding to a vector of means of size 256 and a vector of standard deviations of size 256, where 256 is the length of the latent variable. The decoder of the CS-VAE maps the sampled latent variable of size 256 back to a representation of the same size as the image feature, which acts as a reconstructed image representation. During training, we learn this reconstruction by computing a smoothed loss between the generated representation and the image feature of the frozen ResNet152.

Caption encoder is a single-layer GRU network with a hidden size of 1024. Each word in the caption is represented through a randomly initialized embedding layer that maps it to a fixed-size vector. The caption encoder takes a caption as input and generates a hidden representation of size 1024.

CPP-VAE is a conditional variational autoencoder. As the input to its encoder, we first concatenate the image representation with the caption representation of size 1024. The encoder then maps this concatenated representation to an intermediate embedding. Then, to generate the mean and standard deviation vectors, two fully-connected branches are considered, mapping this embedding to a vector of means of size 256 and a vector of standard deviations of size 256, where 256 is the length of the latent variable. To sample the latent variable, we make use of our extended reparameterization trick, explained in Eq. 5, which unifies the conditioning and the sampling of the latent variable. The CPP-VAE's decoder then maps this latent representation, through a few fully-connected layers, to a vector of the same size as the word embeddings. We then apply batch normalization [14] to this representation, which subsequently acts as the first token to the caption decoder.

Caption decoder is also a single-layer GRU network with a hidden size of 1024. Its first token is the representation generated by the CPP-VAE's decoder, while the remaining tokens are the embeddings of the words in the corresponding caption. To decode the caption, we use a teacher-forcing technique during training: at each time-step, the network chooses with some probability whether to use its own output at the previous time-step or the ground-truth token as input. This probability is decreased linearly at each training epoch such that, after a certain number of epochs, the model becomes completely autoregressive, i.e., uses only its own output as input to the next time-step. Note that, at test time, captions are generated completely autoregressively, i.e., the ground-truth tokens are never used.
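The snippet below sketches how the batch-normalized representation seeds the caption decoder; the 1024-dimensional GRU follows the text, while the word-embedding size (300) and vocabulary size are illustrative assumptions.

import torch
import torch.nn as nn

emb_dim, hidden_dim, vocab_size = 300, 1024, 10000               # embedding and vocabulary sizes are assumptions
word_emb = nn.Embedding(vocab_size, emb_dim)
caption_gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)

first_token = nn.BatchNorm1d(emb_dim)(torch.randn(8, emb_dim))   # batch-normalized CPP-VAE decoder output
caption_ids = torch.randint(0, vocab_size, (8, 12))              # ground-truth tokens used for teacher forcing

inputs = torch.cat([first_token.unsqueeze(1), word_emb(caption_ids)], dim=1)
outputs, _ = caption_gru(inputs)                                 # word logits come from a linear layer over `outputs`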

Appendix 0.J Diverse Text Generation Related Work

There are a number of studies that utilize generative models for language modeling. For instance, [8] uses VAEs and LSTMs for unconditional language modeling, a setting in which posterior collapse may occur if the VAE is not trained carefully. To handle posterior collapse in language modeling, the authors of [8] directly match the aggregated posterior to the prior; they argue that this can be considered an extension of variational autoencoders with a regularization term arising from maximizing mutual information, thereby addressing the posterior collapse issue. VAEs are also used for language modeling in [21], where it is observed that it is hard to find a good balance between language modeling and representation learning. To improve the training of VAEs in such scenarios, the authors of [21] first pretrain the inference network in an autoencoder fashion, so that it learns a good deterministic representation of the data, and then train the whole VAE while weighting the KL term. However, this second step modifies the way VAEs optimize the variational lower bound, and the proposed technique prevents the model from being trained end-to-end.

Unlike these approaches, our method considers the case of conditional sequence (text) generation, where the conditioning signal (the image to be captioned in our case) is strong enough for the generator to rely on it alone.

A recent work [6] proposes to separate diversification from generation for sequence generation and language modeling. The diversification stage uses a mixture of experts (MoE) to sample different binary masks on the source sequence for diverse content selection, and the generation stage uses a standard encoder-decoder model given each selected content from the source sequence. While shown to be effective in generating diverse sequences, this approach relies heavily on the selection step, where one needs to select the information in the source that is most important for generating the target sequence; the diversity of the generated target sequences therefore depends on the diversity of the selected parts of the source sequence. Similarly, the authors of [32] utilize an MoE for the task of diverse machine translation. While this is a form of diverse text generation and has been shown to be highly successful in generating diverse translations of each source sentence, it relies on the availability of a stochastic dataset, i.e., on having access to multiple target sequences for each source sentence during training.

While these approaches are successful in generating diverse sentences given the conditioning sequence, they assume access to a stochastic dataset, unlike our approach, which works with deterministic datasets.

Appendix 0.K Ablation Study on Diverse Image Captioning

In addition to the experiments in the main paper, in Table 8 we also evaluate our approach, the autoregressive baseline, and the CVAE in terms of BLEU1, BLEU2, BLEU3, and BLEU4 scores of the captions generated at test time. For the autoregressive baseline, the model generates one caption per image, so computing the BLEU scores is straightforward. For the CVAE, we consider the best BLEU score among all sampled captions according to the best-matching ground-truth caption. For our model, we consider the caption from the mode, i.e., the one generated from the mode of the latent distribution. Although the caption sampled from CPP-VAE is not chosen based on the best match with the ground-truth caption (unlike the CVAE), it shows promising quality in terms of BLEU scores. For completeness and fairness, we also report the best-of-sampled-captions results for our approach.

Model BLEU1 BLEU2 BLEU3 BLEU4
Autoregressive (deterministic) 0.46 0.39 0.21 0.16
Conditional VAE (best of captions) 0.44 0.38 0.20 0.17
CPP-VAE (caption from mode) 0.44 0.37 0.20 0.14
CPP-VAE (best of captions) 0.45 0.39 0.23 0.18
Table 8: BLEU scores of different orders for sampled captions from our model as well as the baselines.

The results in Table 8 clearly show the effectiveness of sampling from the mode in our approach: one can simply rely on the mode of the distribution to obtain a caption of reasonably high quality.
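For reference, the evaluation protocol of Table 8 can be reproduced with standard tooling; the sketch below uses NLTK's sentence_bleu to score a single caption and to pick the best among several samples, with purely illustrative tokenized captions.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Illustrative references and sampled captions (tokenized).
references = [["a", "man", "riding", "a", "horse"], ["a", "person", "on", "a", "horse"]]
samples = [["a", "man", "on", "a", "horse"], ["a", "dog", "in", "a", "field"]]

smooth = SmoothingFunction().method1

def bleu_n(refs, hyp, n):
    # BLEU-n with uniform n-gram weights, as reported in Table 8.
    return sentence_bleu(refs, hyp, weights=tuple([1.0 / n] * n), smoothing_function=smooth)

best_of_samples_bleu4 = max(bleu_n(references, s, 4) for s in samples)   # "best of sampled captions" protocol
single_caption_bleu4 = bleu_n(references, samples[0], 4)                 # e.g., the caption from the mode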

Appendix 0.L Human Motion Prediction Qualitative Results

Here we provide a number of qualitative results on diverse human motion prediction on the Human3.6M dataset. As can be seen in Figures 7 to 12, the motions generated by our approach are diverse and natural, and mostly within the context of the observed motion.

Figure 7: Qualitative evaluation of the diversity in human motion. The first row illustrates the ground-truth motion. The first six poses of each row depict the observation (the condition) and the rest are sampled from our model. Each row is a randomly sampled motion (not cherry picked). As can be seen, all sampled motions are natural, with a smooth transition from the observed to the generated ones. The diversity increases as we increase the sequence length.
Figure 8: Additional qualitative evaluation of the diversity in human motion.
Figure 9: Additional qualitative evaluation of the diversity in human motion.
Figure 10: Additional qualitative evaluation of the diversity in human motion.
Figure 11: Additional qualitative evaluation of the diversity in human motion.
Figure 12: Additional qualitative evaluation of the diversity in human motion.

Appendix 0.M Diverse Image Captioning Qualitative Results

In this section, we provide a number of qualitative examples of captions generated by our approach. As illustrated in Figures 13 to 19, each image comes with five different ground-truth captions; however, as mentioned in the paper, during training we only utilize one of them (i.e., we train with a deterministic dataset). While the captions generated by our approach are diverse, they all describe the image adequately. Note that a feature of our approach is its ability to generate a caption from the mode of its distribution, which usually yields a good descriptive caption. This is also evidenced by the quantitative results in Table 8, where the BLEU scores for the caption from the mode are relatively high compared to the baselines. Note that, for the conditional VAE, all sampled captions are identical despite sampling multiple latent variables; therefore, we provide only one caption for this baseline.

Figure 13: Qualitative evaluation of the diversity in generated captions. While the captions generated by our approach are diverse, they all describe the image properly. The caption from the mode is also usually a good descriptive caption.
Figure 14: Additional qualitative evaluation of the diversity in generated captions.
Figure 15: Additional qualitative evaluation of the diversity in generated captions.
Figure 16: Additional qualitative evaluation of the diversity in generated captions.
Figure 17: Additional qualitative evaluation of the diversity in generated captions.
Figure 18: Additional qualitative evaluation of the diversity in generated captions.
Figure 19: Additional qualitative evaluation of the diversity in generated captions.