On the Latent Space of Wasserstein Auto-Encoders

02/11/2018 · Paul K. Rubenstein et al. · arXiv:1802.03761

We study the role of latent space dimensionality in Wasserstein auto-encoders (WAEs). Through experimentation on synthetic and real datasets, we argue that random encoders should be preferred over deterministic encoders. We highlight the potential of WAEs for representation learning with promising results on a benchmark disentanglement task.


1 Introduction

Unsupervised generative modeling is increasingly attracting the attention of the machine learning community. Given a collection of unlabelled data points $X = \{x_1, \ldots, x_N\}$, the ultimate goal of the task is to tune a model capable of generating sets of synthetic points $\tilde{X}$ which look similar to $X$. The closely related field of unsupervised representation learning additionally aims to produce semantically meaningful representations (or features) of the data points $x_i$.

There are various ways of defining the notion of similarity between two sets of data points. The most common approach assumes that $X$ and $\tilde{X}$ are sampled independently from two probability distributions $P_X$ and $P_G$ respectively, and employs one of the known divergence measures for distributions.

Two major approaches currently dominate this field. Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014) minimize the Kullback-Leibler (KL) divergence $D_{\mathrm{KL}}(P_X \,\|\, P_G)$, which is equivalent to maximizing the marginal log-likelihood of the model $P_G$. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) employ an elegant framework, commonly referred to as adversarial training, which is suitable for many different divergence measures, including (but not limited to) $f$-divergences (Nowozin et al., 2016), the 1-Wasserstein distance (Arjovsky et al., 2017), and the Maximum Mean Discrepancy (MMD) (Binkowski et al., 2018).

Both approaches have their pros and cons. VAEs can both generate and encode (featurize) data points, are stable to train, and typically manage to cover all modes of the data distribution. Unfortunately, they often produce examples that are far from the true data manifold. This is especially true for structured high-dimensional datasets such as natural images, where VAEs generate blurry pictures.

GANs, on the other hand, are good at producing realistic looking examples (landing on or very close to the manifold); however, they cannot featurize the points, often cover only a few modes of the data distribution, and are highly unstable to train. A number of recent papers (Makhzani et al., 2016; Mescheder et al., 2017) propose different ways to blend auto-encoding architectures of VAEs with adversarial training in the hope of getting the best of both worlds.

The sample quality of VAEs was recently addressed by Wasserstein Auto-Encoders (WAE) (Tolstikhin et al., 2018). By switching the focus from the KL objective to the optimal transport distance, the authors presented an auto-encoder architecture sharing most of the nice properties of VAEs while providing samples of better quality. Importantly, WAEs still allow for adversary-free versions, resulting in a min-min formulation leading to stable training.

In this work we aim at further improving the quality of generative modeling and representation learning techniques. We will focus on adversary-free auto-encoding architectures of WAE, as we find the instability of the adversarial training to be an unfortunate obstacle when it comes to controlled reproducible experiments.

We address some of the important design choices of WAEs related to the properties of the latent space which were not discussed in (Tolstikhin et al., 2018). Based on new theoretical insights, we provide concrete algorithmic suggestions and report empirical results verifying our conclusions. Our main contributions are:

  1. We illustrate different ways in which a mismatch between the latent space dimensionality $d_Z$ and the intrinsic data dimensionality $d_I$ may hurt the performance of WAEs (Section 2.1).

  2. We argue that WAEs can be made adaptive to the unknown intrinsic data dimensionality by using probabilistic (random) encoders rather than the deterministic encoders used in all experiments of Tolstikhin et al. (2018). The performance of random encoders is on par with the deterministic ones when $d_Z = d_I$, and potentially better when $d_Z > d_I$, which is typical for real-world applications (Section 2.2). This suggests that random encoders should generally be preferred when using WAEs.

  3. We verify these conclusions with experiments on synthetic (newly introduced fading squares) and real world (CelebA) image datasets.

  4. We evaluate the usefulness of WAEs with random encoders for representation learning by running them on the dSprites dataset, a benchmark task in learning “disentangled representations” (Section 3). WAEs are capable of simultaneously achieving better test reconstruction and disentanglement quality than the current state-of-the-art method, the β-VAE (Higgins et al., 2017). We conclude that WAEs are a promising direction for future research in this field, because compared to VAEs and β-VAEs they allow more freedom in shaping the learned latent data manifold.

We finish this section with a short description of WAEs, preliminaries, and notations.

Short introduction to Wasserstein auto-encoders

Similarly to VAEs, WAEs describe a particular way to train probabilistic latent variable models (LVMs) $P_G$. LVMs act by first sampling a code (feature) vector $z$ from a prior distribution $P_Z$ defined over the latent space $\mathcal{Z}$, and then mapping it to a random input point $x \in \mathcal{X}$ using a conditional distribution $P_G(X|Z)$, also known as the decoder. We will mostly be working with image datasets, so for simplicity we take $\mathcal{X}$ to be a space of images and $\mathcal{Z} = \mathbb{R}^{d_Z}$, and refer to points $x \in \mathcal{X}$ as pictures, images, or inputs interchangeably.

Instead of minimizing the KL divergence between the LVM $P_G$ and the unknown data distribution $P_X$ as done by VAEs, WAEs aim at minimizing any optimal transport distance between them. Given a non-negative cost function $c(x, y)$ between two images, WAEs minimize the following objective with respect to the parameters of the decoder $G$:

$$\inf_{Q(Z|X)} \; \mathbb{E}_{P_X}\,\mathbb{E}_{Q(Z|X)}\big[c\big(X, G(Z)\big)\big] \;+\; \lambda \cdot \mathcal{D}_Z(Q_Z, P_Z) \qquad (1)$$

where the conditional distributions $Q(Z|X=x)$ are commonly known as encoders, $Q_Z := \mathbb{E}_{P_X}[Q(Z|X)]$ is the aggregated posterior distribution, $\mathcal{D}_Z$ is any divergence measure between two distributions over $\mathcal{Z}$, and $\lambda > 0$ is a regularization coefficient. In practice $Q(Z|X)$ and $G$ are often parametrized with deep nets, in which case backpropagation can be used with stochastic gradient descent techniques to optimize the objective.
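To make (1) concrete, here is a minimal sketch of an adversary-free WAE-MMD training loss in PyTorch, using an RBF kernel for the MMD estimate. All names (encoder, decoder, prior_sample, lam) and the kernel choice are our own illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn.functional as F

def mmd_rbf(z_q, z_p, sigma2=1.0):
    # Kernel-based estimate of MMD^2 between encoded codes z_q ~ Q_Z
    # and prior samples z_p ~ P_Z (both of shape [n, d_z]).
    n = z_q.shape[0]
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma2))
    off_diag = 1.0 - torch.eye(n, device=z_q.device)
    return ((k(z_q, z_q) * off_diag).sum() / (n * (n - 1))
            + (k(z_p, z_p) * off_diag).sum() / (n * (n - 1))
            - 2 * k(z_q, z_p).mean())

def wae_mmd_loss(x, encoder, decoder, prior_sample, lam=10.0):
    # Objective (1): reconstruction cost plus lambda * D_Z(Q_Z, P_Z).
    z = encoder(x)                      # one sample per input from Q(Z|X=x)
    x_rec = decoder(z)                  # deterministic decoder G(z), sigmoid outputs
    rec = F.binary_cross_entropy(x_rec, x, reduction='sum') / x.shape[0]
    return rec + lam * mmd_rbf(z, prior_sample(x.shape[0]))
```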

We will only consider deterministic decoders mapping codes $z$ to pictures $G(z)$ (formally, $P_G(X|Z=z) = \delta_{G(z)}$, a point distribution supported on $G(z)$). Another design choice that must be made when using a WAE is whether the encoder should map an image $x$ to a distribution $Q(Z|X=x)$ over the latent space, or to a single code $z(x)$, i.e. $Q(Z|X=x) = \delta_{z(x)}$. We refer to the former type as random encoders and the latter as deterministic encoders.
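The difference between the two encoder types lies only in the final layers. A hedged sketch (layer sizes and names are hypothetical):

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    # Random encoder: maps x to N(mu(x), diag(sigma^2(x))) and returns a
    # reparametrised sample together with the distribution parameters.
    def __init__(self, d_x, d_z, d_h=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_x, d_h), nn.ReLU())
        self.mu_head = nn.Linear(d_h, d_z)
        self.log_var_head = nn.Linear(d_h, d_z)

    def forward(self, x):
        h = self.body(x)
        mu, log_var = self.mu_head(h), self.log_var_head(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return z, mu, log_var

# A deterministic encoder is the degenerate case sigma^2(x) = 0:
# it keeps only mu_head and returns z = mu(x) directly.
```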

The objective (1) is similar to that of the VAE and has two terms. The first, the reconstruction term, aligns the encoder-decoder pair so that the encoded images can be accurately reconstructed by the decoder, as measured by the cost function $c$ (we use the cross-entropy loss throughout).

The second, the regularization term, is different from that of VAEs: it forces the aggregated posterior $Q_Z$ to match the prior distribution $P_Z$, rather than asking the point-wise posteriors $Q(Z|X=x)$ to match $P_Z$ simultaneously for all data points $x$. To better understand the difference, note that $Q_Z$ is the distribution obtained by averaging the conditional distributions $Q(Z|X=x)$ over all points $x$ drawn from the data distribution $P_X$. This means that WAEs explicitly control the shape of the entire encoded dataset, while VAEs constrain every input point separately.

Both existing versions of the algorithm (WAE-GAN, based on adversarial training, and the adversary-free WAE-MMD, based on the maximum mean discrepancy; only the latter is used in this paper) allow for any prior distribution $P_Z$ and encoder $Q(Z|X)$, as long as both can be sampled from efficiently. As a result, the WAE model may be easily endowed with prior knowledge about the possible structure of the dataset through the choice of $P_Z$.

Notation. We denote by $\mu(x)$ the mean of the encoding distribution $Q(Z|X=x)$ for a given input $x$. By $z_i$ we mean the $i$-th coordinate of $z \in \mathcal{Z}$, and by $P_Z^i$ and $Q_Z^i$ the marginal distributions of the prior and aggregated posterior over the $i$-th dimension of $\mathcal{Z}$, respectively.

2 Dimension mismatch and random encoders

Figure 1: Illustration of the 2-dimensional latent space of WAEs trained on the fading squares dataset with deterministic (first row) and random (second row) encoders and a uniform prior over a box. Columns: (a) what the MMD sees: 100 points sampled from the aggregated posterior (dark red) and prior (blue); (b) what the encoder does: the same plot but with 1000 points, where for the random encoder black points show data points mapped to the mean values $\mu(x)$ of the encoder; (c) what the decoder does: decoder outputs at the corresponding points of the latent space.

In this section we consider the behaviour of WAEs with deterministic and random encoders in the case of a mismatch between the dimensionality $d_Z$ of the latent space and the intrinsic dimensionality $d_I$ of the data distribution $P_X$, which is informally the minimum number of parameters required to continuously parametrise the data manifold. (It may be that $P_Z$ is supported on a low-dimensional subspace of $\mathcal{Z}$, e.g. on a hypersphere, in which case we actually mean a mismatch between $d_I$ and the dimensionality of the support of $P_Z$. However, this is irrelevant for the priors we consider in this work, which are either Gaussian or uniform on a box.)

2.1 Dimension mismatch is harmful

What happens if a deterministic-encoder WAE is trained with a latent space of dimension $d_Z$ larger than the intrinsic dimensionality $d_I$? If the encoder is continuous, then the data distribution $P_X$ will be mapped to an aggregated posterior $Q_Z$ supported on a latent manifold of dimension at most $d_I$, while the regularizer in (1) will encourage the encoder to fill the latent space similarly to the prior as much as possible. This is a hard task for the encoder, for the same reason that it is hard to fill a plane with a one-dimensional curve.

To empirically investigate this setting, we introduce the simple synthetic fading squares dataset, consisting of images of a centred grey square on a black background. The intensity of the square varies uniformly from 0 (black) to 1 (white) in small equal steps. The intrinsic dimensionality of this dataset is therefore $d_I = 1$, as each image in the dataset is uniquely identified by the intensity of its grey square.
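A sketch of how such a dataset can be generated; the image and square sizes below are illustrative placeholders, since only the one-parameter structure matters:

```python
import numpy as np

def fading_squares(img_size=32, square_size=8, n_shades=256):
    # One image per grey shade: a centred square on a black background.
    shades = np.linspace(0.0, 1.0, n_shades)          # 0 = black, 1 = white
    lo = (img_size - square_size) // 2
    hi = lo + square_size
    images = np.zeros((n_shades, img_size, img_size), dtype=np.float32)
    images[:, lo:hi, lo:hi] = shades[:, None, None]   # paint the centred square
    # Intrinsic dimensionality d_I = 1: the shade alone identifies each image.
    return images
```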

We trained a deterministic-encoder WAE with a uniform prior over a box in a 2-dimensional latent space on this dataset. Since $d_Z = 2$, we can easily visualise both the learned embedding of the data into the latent space and the output of the decoder across the whole latent space. This is displayed in Figure 1 (upper middle) for one such WAE.

The results of this simple experiment are illuminating about the behaviour of deterministic-encoder WAEs. The WAE is forced to reconstruct the images well, while at the same time trying to fill the latent space uniformly with the 1-dimensional data manifold. The only way to do this is by curling the manifold up in the latent space. In practice, the WAE must only fill the space to the extent that it fools the divergence measure used. The upper-left plot of Figure 1 shows that when only a mini-batch of samples is used, it is much less obvious that the aggregated posterior $Q_Z$ does not match the prior than when we visualise the entire data manifold, as in the upper-middle plot. We found that larger mini-batches resulted in tighter curling of the manifold supporting $Q_Z$, suggesting that mini-batch size may strongly affect the performance of WAEs.

We repeated the same experiment with a random-encoder WAE, for which the encoder maps an input image $x$ to a uniform distribution over an axis-aligned box with centre $\mu(x)$ and side lengths $\sigma(x)$. The lower-middle plot of Figure 1 shows the resulting behaviour of the learned encoder. In contrast to the deterministic-encoder WAE, the random-encoder WAE is robust to the fact that $d_Z > d_I$: it can use one dimension to encode useful information for the decoder while filling the other with noise. That is, a single image gets mapped to a thin and tall box in the latent space. In this way, the random-encoder WAE is able to properly match the aggregated posterior $Q_Z$ to the prior distribution $P_Z$.
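Sampling from such a uniform box encoder is a one-liner; a sketch with hypothetical tensors mu (box centres) and side (side lengths), both of shape [batch, d_z]:

```python
import torch

def sample_box(mu, side):
    # z is uniform on the axis-aligned box [mu - side/2, mu + side/2).
    # A dimension with a large side length is filled with near-pure noise,
    # while a tiny side length makes that coordinate nearly deterministic.
    return mu + (torch.rand_like(mu) - 0.5) * side
```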

To what extent is it actually a problem that the deterministic WAE represents the data as a curved manifold in the latent space? There are two issues.

Poor sample quality:

Only a small fraction of the total volume of the latent space is covered by the deterministic encoder. Hence the decoder is only trained on this small fraction, because under the objective (1) the decoder learns to act only on the encoded training images. While it appears in this 2-dimensional toy example that the quality of decoder samples is nonetheless good everywhere, in high dimensions such “holes” may be significantly more problematic. This is a possible explanation for the results presented in Table 1 of Section 2.2, in which we find that large latent dimensionalities decrease the quality of the samples produced by deterministic WAEs. (Preliminary experimentation suggests that the better quality of samples from the WAE-GAN compared to the WAE-MMD reported by Tolstikhin et al. (2018) could be a result of the instability of the associated adversarial training. We found that when training a deterministic-encoder WAE-GAN on the fading squares dataset, the 1-D embedding of the data manifold (the support of $Q_Z$) would move constantly through the support of $P_Z$ throughout training without converging. This means that the decoder is trained on a much larger fraction of the total volume of the latent space compared to the WAE-MMD, for which the stability of training means that convergence to the final manifold, constituting a small fraction of the latent space, is quick.)

Wrong proportions of generated images:

We found that although in this simple example all of the samples generated by the deterministic-encoder WAE are of good quality, they are not produced in the correct proportions. By analogy, this would correspond to a model trained on MNIST producing too few 3s and too many 7s.

To see this, consider the mean pixel value of an image in our toy dataset. It is a 1-dimensional random variable, uniformly distributed on the interval $[0, M]$, where $M$ is the mean pixel value (over the whole image) in the case of a white square. We trained deterministic- and random-encoder WAEs, and for each one generated 100,000 images by sampling from the prior distribution and pushing through the learned decoder. As a baseline, we also ran this procedure using VAEs with the same architecture. We then calculated the cumulative distribution function (CDF) of the mean pixel values and compared it to the theoretical uniform distribution.
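This check is easy to reproduce; a sketch (the white-square mean M must be computed from the dataset, and the array shapes are placeholders):

```python
import numpy as np

def cdf_deviation(generated, m_white):
    # Deviation of the empirical CDF of mean pixel values from Uniform[0, M].
    means = np.sort(generated.reshape(len(generated), -1).mean(axis=1))
    empirical = np.arange(1, len(means) + 1) / len(means)   # empirical CDF at sorted means
    theoretical = np.clip(means / m_white, 0.0, 1.0)        # Uniform[0, M] CDF
    return empirical - theoretical  # near 0 everywhere iff proportions are correct
```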

Figure 2: Deviation from the correct cumulative distribution of the mean pixel values. If images were generated with the correct frequencies, the deviations should be close to 0. The deterministic WAE does not meet this goal.

This is summarised in Figure 2, which displays the deviation from the theoretical CDF for each of the models trained. The deviation for deterministic-encoder WAEs is consistently worse than for the random-encoder WAEs and VAEs, which fare comparably with one another. Note that while observing a uniform distribution here does not prove that images are generated with the correct frequencies, deviation from it does indicate failure.

2.2 Random encoders with large $d_Z$

To test our new intuitions about the behaviour of deterministic- and random-encoder WAEs with different latent dimensions, we next consider the CelebA dataset. All experiments reported in this section used Gaussian priors and, for the random-encoder WAEs, Gaussian encoders. A fixed convolutional architecture with cross-entropy reconstruction loss was used for all experiments. To keep computation time feasible, we used small networks.

Table 1 shows the results of training 5 random- and 5 deterministic-encoder WAEs for each $d_Z \in \{32, 64, 128, 256\}$. We found that both deterministic- and random-encoder WAEs exhibit very similar behaviour: test reconstruction error decreases as $d_Z$ increases, while the FID scores (Heusel et al., 2017) of random samples generated by the models after training first decrease to a minimum and then subsequently increase (lower FID scores mean better sample quality).


d_Z     FID score                      Test reconstruction (log)
        Det.           Rand.           Det.              Rand.
32      75.0 ± 0.7     74.8 ± 0.5      6457.0 ± 10.4     6445.5 ± 7.5
64      71.6 ± 0.8     71.1 ± 1.0      6364.4 ± 7.4      6365.0 ± 5.4
128     76.8 ± 1.3     76.8 ± 1.2      6300.5 ± 6.6      6309.3 ± 9.7
256     147.6 ± 2.3    139.8 ± 4.2     6265.3 ± 9.5      6262.6 ± 6.7

Table 1: FID scores and test reconstruction errors (mean ± s.d. over 5 runs) for deterministic- and random-encoder WAEs trained on CelebA for various latent dimensions $d_Z$. Test reconstructions get better with increased dimension, while FID scores deteriorate beyond $d_Z = 64$.

For deterministic encoders, this agrees with the intuition we gained from the fading squares experiment. Unable to fill the whole latent space when $d_Z > d_I$, the encoder leaves large holes in the latent space on which the decoder is never trained. When $d_Z \gg d_I$, these holes occupy most of the total volume, and thus most of the samples produced by the decoder from draws of the prior are poor.

For random encoders we did not expect this behaviour. Rather than automatically filling unnecessary dimensions with noise when $d_Z > d_I$, as in the fading squares example, thus making $Q_Z$ accurately match $P_Z$ and preserving good sample quality, the random encoders would “collapse” to deterministic encoders. That is, the variances of $Q(Z|X=x)$ tend to 0 for almost all dimensions and inputs $x$. (More specifically, we parametrised the log-variances for each dimension and observed that their maximum on any mini-batch of data would typically be a large negative number, i.e. variances numerically indistinguishable from 0.) The fact that this happens for most, but not all, dimensions explains why with a 256-dimensional latent space random WAEs produced samples with bad FID scores, yet still better than those of deterministic WAEs: some dimensionality reduction does occur, just not as much as we would hope for.
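Collapse of this kind is straightforward to monitor during training; a sketch of the diagnostic described above (the threshold is an arbitrary choice on our part):

```python
import torch

def variance_collapse_report(log_var, threshold=-10.0):
    # log_var: encoder log-variances for a mini-batch, shape [batch, d_z].
    # A dimension whose largest log-variance over the batch is far below 0
    # has effectively collapsed to a deterministic coordinate.
    per_dim_max = log_var.max(dim=0).values
    return {
        "max_log_var": per_dim_max.max().item(),
        "collapsed_dims": int((per_dim_max < threshold).sum()),
    }
```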

Figure 3: (a) Test reconstruction errors and (b) FID scores versus regularisation for random-encoder WAEs with latent space dimensions $d_Z = 32$ (first row) and $d_Z = 256$ (second row), for different regularisation coefficients $\lambda_{\text{var}}$. In each plot, the dashed/dotted black lines represent the mean ± s.d. for deterministic-encoder WAEs with the same $d_Z$ (i.e. 32 or 256). The dashed/dotted green lines represent the mean ± s.d. for deterministic WAEs with $d_Z = 64$, for which the FID scores were best amongst all latent dimensions we tested. Overlaid images are (a) test reconstructions and (b) random samples coming from the experiments indicated by the red circles. These plots show that when $d_Z > d_I$, (i) random-encoder WAEs perform comparably to deterministic WAEs, and (ii) when appropriately regularised, random encoders with high-dimensional latent spaces are able to produce samples of similar quality to the deterministic encoders with the best latent space dimension, while simultaneously achieving lower test reconstruction errors.

Resolving variance collapse through regularization

The cause of this variance collapse is uncertain to us; two possible explanations are a problem of optimization or a failing of the MMD as a divergence measure. We found, however, that we could effectively eliminate this issue by adding additional regularisation in the form of an $L_1$ penalty on the log-variances. This encourages the variances to remain close to 1 and thus the encoder to remain stochastic. More precisely, we added the following term to the objective function to be minimised:

$$\lambda_{\text{var}} \sum_{i=1}^{d_Z} \sum_{n=1}^{N} \big| \log \sigma_i^2(x_n) \big| \qquad (2)$$

where $i$ indexes the dimensions of the latent space $\mathcal{Z}$, $n$ indexes the inputs $x_n$ in a mini-batch of size $N$, $\sigma_i^2(x_n)$ is the encoder variance in dimension $i$ for input $x_n$, and $\lambda_{\text{var}}$ is a new regularization coefficient.

We experimented with both $L_1$ and $L_2$ regularisation. We found similar qualitative behaviour with both, but found $L_1$ regularisation to give better performance and thus report only these results. Note that an $L_1$ penalty on the log-variances should encourage the encoder/decoder pair to use latent dimensions sparsely to code useful information. Indeed, the penalty encourages only a few dimensions to have non-zero log-variances; and if the variance of $Q(Z|X=x)$ in some dimension is always 1, then in order for the marginal $Q_Z^i$ to match the unit-variance prior marginal $P_Z^i$, the mean $\mu_i(x)$ must be 0 for all inputs $x$, so that dimension carries no information about the input.
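In code, the penalty (2) is a single extra term added to the loss of the earlier sketch; lam_var is our name for the new coefficient:

```python
import torch

def log_var_penalty(log_var, lam_var):
    # L1 penalty on encoder log-variances, summed over latent dimensions
    # and mini-batch inputs; its minimum is at variance 1 (log-variance 0),
    # so it discourages collapse towards variance 0.
    return lam_var * log_var.abs().sum()

# total = wae_mmd_loss(x, enc, dec, prior_sample) + log_var_penalty(log_var, lam_var)
```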

Using latent dimensions 32 and 256, to consider both the case of under- and over-shooting the intrinsic dimensionality of the dataset (the fact that the deterministic WAE produced samples with better FIDs with $d_Z = 64$ than with $d_Z = 32$ suggested to us that the intrinsic dimensionality of the CelebA dataset is greater than 32), we trained 5 $L_1$-regularized random-encoder WAEs for each of a variety of values of $\lambda_{\text{var}}$. Figure 3 shows the test reconstruction errors and FID scores obtained at the end of training.

When $d_Z = 256$, regularisation can significantly improve the performance of random-encoder WAEs compared to their deterministic counterparts. In particular, tuning for the best coefficient $\lambda_{\text{var}}$ results in samples of quality comparable to deterministic encoders with the best latent dimension, while simultaneously achieving lower test reconstruction errors. Through appropriate regularisation, random-encoder WAEs are thus able to adapt to the case $d_Z > d_I$ and still perform well.

When $d_Z = 32$, regularisation does not improve test reconstruction error or FID scores, and the random-encoder WAEs perform at best the same as deterministic-encoder WAEs. This makes sense: if in the deterministic case the WAE is already having to perform “lossy compression” by reducing the effective dimensionality of the dataset, then the optimal random encoder cannot do better than becoming deterministic. Thus, forcing the encoder to be more random can only harm performance.

The reader will notice that we have merely substituted the problem of searching for the “right” latent dimensionality $d_Z$ with the problem of searching for the “right” regularisation coefficient $\lambda_{\text{var}}$. However, these results show that random encoders are capable of adapting to the intrinsic data dimensionality; future directions of research include exploring divergence measures other than the MMD, and investigating whether the regularisation coefficients can be adaptively adjusted by the learning machine itself.

3 Learned representation and disentanglement

(a) 4-variable dSprites disentanglement task. (b) 5-variable dSprites disentanglement task.
Figure 4: Disentanglement vs reconstruction error for β-VAEs with various values of β and WAEs with various regularisation coefficients λ (up and left is better). Note that there is no direct way to compare different values of β and λ, but in both cases increasing the value of the hyper-parameter is correlated with increasing reconstruction error. WAEs are capable of achieving comparable or better disentanglement scores than the β-VAE while simultaneously achieving lower reconstruction errors. In particular, the WAE attains a higher maximum score on the 5-variable disentanglement task than the β-VAE.

Disentangled representation learning is closely related to the more general problem of manifold learning for which auto-encoding architectures are often employed. The goal, though not precisely defined, is to learn representations of datasets such that individual coordinates in the feature space correspond to human-interpretable generative factors (also referred to as factors of variation in the literature). It is argued by Bengio et al. (2013) and Lake et al. (2017) that learning such representations is essential for significant progress in machine learning research.

3.1 A benchmark disentanglement task

Recently, Higgins et al. (2017) proposed the synthetic dSprites dataset and a metric to evaluate algorithms on their ability to learn disentangled representations. The dataset consists of 64×64-pixel images of white shapes on a black background with 5 factors of variation: shape, size, rotation, x-position and y-position. Samples from this dataset can be seen in the first row of Figure 5.

The metric can be used to evaluate the “level of disentanglement” in the representation learned by a model when the ground-truth generative factors are known for each image, as is the case for the dSprites dataset. We provide here an intuition for what the metric does; see Higgins et al. (2017) for full details. Given a trained feature map $\mu$ from the image space to the latent space, we ask the following question. Suppose we are given two images $x_1$ and $x_2$ which have exactly one latent factor whose value is the same: say, they are both the same shape, but differ in size, position and rotation. By looking at the absolute values of the difference in feature vectors, $|\mu(x_1) - \mu(x_2)|$, is it possible to identify that it is the shape that they share in common, and not any other factor?

The idea is that if a disentangled representation has indeed been learned, then for each latent factor there should be some feature coordinate corresponding to it. The coordinate of $|\mu(x_1) - \mu(x_2)|$ corresponding to the shared latent factor should then be close to zero, while the other coordinates should on average be larger.
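A simplified sketch of the metric as we understand it; the pair sampler and the choice of classifier below are illustrative assumptions, not the exact protocol of Higgins et al. (2017):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def disentanglement_score(mu, sample_pair, n_factors, pairs_per_factor=200):
    # mu: feature map image -> latent vector; sample_pair(k) returns two
    # images that agree on generative factor k only.
    diffs, labels = [], []
    for k in range(n_factors):
        for _ in range(pairs_per_factor):
            x1, x2 = sample_pair(k)
            diffs.append(np.abs(mu(x1) - mu(x2)))  # |mu(x1) - mu(x2)|
            labels.append(k)
    X, y = np.stack(diffs), np.array(labels)
    # If the representation is disentangled, a simple classifier can read off
    # the shared factor from which coordinate of the difference is near zero.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.score(X, y)
```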

In the same paper, the authors introduce the β-VAE, which is currently considered the state of the art in disentangled representation learning. The β-VAE is a modification of the original VAE in which the KL regularisation term is multiplied by a scalar hyper-parameter β. The authors show that by tuning β, they are able to explore a trade-off between entangled representations with low reconstruction error and disentangled representations with high reconstruction error.

We believe that WAEs have advantages over the β-VAE for disentangled representation learning, which can be considered a special case of manifold learning. The results of Tolstikhin et al. (2018) show that WAEs can learn to produce better-quality samples than VAEs, suggesting that in some cases WAEs are able to learn a better representation of the data manifold than VAEs.

Further to this, we observe that the flexibility of the WAE framework allows arbitrary choices of prior and encoder distributions with only trivial changes to implementation in code, meaning that the model can be explicitly endowed with prior knowledge about the possible underlying generative factors of the dataset on which training is taking place.

In particular, priors with different topologies can easily be used within the WAE framework. For instance, a uniform distribution over the circle could be used to model the factor of rotation, for which rotations of 0 and 2π should be considered the same; priors with such non-trivial topologies could be combined to encode complex knowledge, such as the presence of a circular variable (rotation), a discrete variable (shape), and three uniform variables (x-position, y-position and scale) in the dSprites dataset.
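A sketch of what sampling from such a structured prior might look like for dSprites; the particular parametrisation (embedding the circle in two coordinates, one-hot shapes) is our illustrative choice:

```python
import math
import torch

def sample_dsprites_prior(n):
    # Product prior: circular rotation, discrete shape, three uniform factors.
    theta = 2 * math.pi * torch.rand(n)                   # angle on the circle
    circle = torch.stack([torch.cos(theta), torch.sin(theta)], dim=1)
    shape = torch.nn.functional.one_hot(torch.randint(0, 3, (n,)), 3).float()
    xys = 2 * torch.rand(n, 3) - 1                        # x, y, scale on [-1, 1)
    return torch.cat([circle, shape, xys], dim=1)         # 2 + 3 + 3 = 8 dims
```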

While fully investigating the possible role of using different prior distributions was outside of the scope of this project, we felt that our initial results in this direction would be of sufficient scientific interest to report here.

3.2 Learning disentangled representations with WAEs

We carefully replicated the main experiment performed by Higgins et al. (2017) on the dSprites dataset, which we describe in brief here. For further details, we refer the reader to Section 4.2 and Appendix A.4 of their paper.

Following the same procedure as Higgins et al. (2017), we used a fixed fully connected architecture with the Bernoulli reconstruction loss for all experiments, with a latent space dimension of 16. We trained β-VAEs for 9 values of β. For each replicate of each value of β, we calculated the disentanglement metric 3 times; from the resulting list of scores, we discarded the bottom 50%. For each experiment, we also recorded the test reconstruction error on a held-out part of the dataset. At the end of this procedure we had 15 pairs of numbers (test reconstruction error, disentanglement score) for each of the 9 choices of β.

We repeated the same process with two types of random-encoder WAEs sharing the same encoder and decoder architectures as the β-VAE. The first type had Gaussian priors and Gaussian encoders. The second type had a uniform prior on a box and uniform encoders mapping to axis-aligned boxes in the latent space (here the log-side-lengths, rather than the log-variances, were parametrised by the encoder). In both cases, the means of the encoders were constrained to lie in the range $(-1, 1)$ on each dimension by tanh activation functions. We trained such WAEs for a range of regularisation coefficients λ.

Higgins et al. (2017) report their results for disentangling on only 4 of the possible 5 variables: although there are 5 factors of variation in the dSprites dataset, the number reported in Figure 6 of the main section of their paper refers to the ability of the β-VAE to provide a feature map with which a classifier can predict whether x-position, y-position, scale and rotation are shared between pairs of images, while ignoring shape (as stated in Appendix A.4 of their paper). We additionally calculated the disentangling metric on the more challenging task of distinguishing between all 5 of the latent variables.

The results of our experimentation are displayed in Figure 4. We were able to replicate their results showing that the β-VAE is capable of achieving essentially perfect scores on the 4-variable disentanglement task (Figure 4(a)), and that good disentanglement of the β-VAE comes at the expense of poorer reconstruction. On the 4-variable disentanglement task, we found that WAEs were able to attain similar levels of disentanglement while retaining significantly better reconstruction errors. Note that in this case it is not really possible to get better disentanglement than the β-VAE, as it already achieves a near-perfect score. On the 5-variable task (Figure 4(b)), WAEs significantly outperformed β-VAEs simultaneously in terms of disentanglement and reconstruction.

Amongst all of the β-VAEs we trained that attained the top 4-variable disentanglement scores, the lowest training reconstruction error was noticeably higher than the corresponding error for the WAEs. The WAE with the best 5-variable disentanglement score also achieved a higher average score (across the 3 independent disentanglement calculations for that experiment) and a lower test reconstruction error than the β-VAE performing best on the 5-variable task. In summary, WAEs are able to outperform β-VAEs simultaneously in terms of disentanglement metric and reconstruction error. Sample reconstructions from each of the aforementioned experiments are displayed in Figure 5.

Figure 5: Row (a): samples from the dSprites dataset; the remaining rows show reconstructions of these images by: Row (b): the Gaussian WAE with the best reconstruction error amongst those attaining top scores on the 4-variable disentanglement metric; Row (c): the Gaussian WAE with the best score on the 5-variable disentanglement metric; Row (d): the β-VAE with the best reconstruction error amongst those attaining top scores on the 4-variable disentanglement metric. This visually confirms what is shown in Figure 4, namely that WAEs can disentangle better than β-VAEs while preserving better reconstructions.

4 Conclusion and future directions

We investigated the problems that can arise when there is a mismatch between the dimension of the latent space of a WAE and the intrinsic dimension of the dataset on which it is trained. We proposed to use random encoders rather than deterministic encoders to mitigate these problems. In practice, we found that additional regularisation on the variances of the encoding distributions was required. With this regularisation, random-encoder WAEs are able to adapt to the case $d_Z > d_I$. We applied regularised random-encoder WAEs to a benchmark disentangled representation learning task, on which good performance was observed.

One direction for future research is to investigate whether random-encoder WAEs can automatically adapt to the intrinsic dimensionality $d_I$ without any hyper-parameter tuning. Possible approaches include deriving theoretically justified regularisation to prevent variance collapse, and considering new divergence measures that take the encoding distribution variances into account. The results of our experiments on the disentanglement benchmark, combined with the flexibility of the WAE framework, indicate that WAEs have the potential to learn useful, semantically meaningful representations of data.
