Generator Reversal

07/28/2017 ∙ by Yannic Kilcher, et al. ∙ ETH Zurich

We consider the problem of training generative models with deep neural networks as generators, i.e. to map latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we propose instead to use more flexible code distributions. These distributions are estimated non-parametrically by reversing the generator map during training. The benefits include: more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization.


1 Introduction

Continuous latent variable models have been developed and studied in statistics for almost a century, with factor analysis young1941maximum; bartholomew1987factor being the most paradigmatic and widespread model family. In the neural network community, autoencoders have classically been studied as dimension reduction methods baldi1989neural; demers1993n, which encode observations into low-dimensional codes allowing approximate reconstruction via decoder or generator networks. By driving such a generator with randomness mackay1995bayesian, one obtains a probabilistic generative model. This line of research has been further developed for deep autoencoders hinton2006reducing; vincent2010stacked and belief networks hinton2006fast; bengio2007greedy, making use of stacked or layerwise training with a focus on pre-training representations for supervised tasks.

However, generative models have recently moved to the center stage of deep learning in their own right. Most notable is the seminal work on Generative Adversarial Networks (GANs) Goodfellow:2014td as well as probabilistic architectures known as Variational Autoencoders (VAEs) Kingma:2013tz; Rezende:2014vm. Here, the focus has moved away from density estimation and towards generative models that, informally speaking, produce samples that are perceptually indistinguishable from samples generated by nature. This is particularly relevant in the context of high-dimensional signals such as images, speech, or text.

Generative models like GANs, VAEs and others typically define a generative model via a deterministic generative mechanism or generator $G_\theta: \mathcal{Z} \to \mathcal{X}$, $z \mapsto G_\theta(z)$, parametrized by $\theta$. The generator is often implemented as a deep neural network (DNN), which is hooked up to a code distribution $P_z$ over $\mathcal{Z}$ to induce a distribution $P_\theta$ over $\mathcal{X}$. It is known that under mild regularity conditions, by a suitable choice of generator, any data distribution can be obtained from an arbitrary fixed $P_z$ kallenberg2006foundations. Relying on the power and flexibility of DNNs, this has led to the view that code distributions should be simple and fixed a priori, e.g. $P_z = \mathcal{N}(0, I)$. As shown in arjovsky2017towards, for DNN generators $P_\theta$ is supported on a countable union of manifolds of dimension at most $\dim(\mathcal{Z})$ though, which may pose challenges if $\dim(\mathcal{Z}) < \dim(\mathcal{X})$. Whereas a current line of research addresses this via alternative (non-MLE or KL-based) discrepancy measures between distributions Dziugaite:2015wd; nowozin2016f; arjovsky2017wasserstein, we investigate an orthogonal direction:

Claim 1.

It is advantageous to increase the modeling power of a generative model not only via the generator $G_\theta$, but also by using more flexible prior code distributions $P_z$.

Another benefit of this approach is the ability to reveal richer structure (e.g. multimodality) in latent space via $P_z$, a view which is also supported by evidence on using more powerful posterior distributions mescheder2017adversarial.

Deep generative models have proven to be notoriously hard to train. The use of complementary recognition networks that operate in a bottom-up fashion and provide approximations to posteriors has been a common trait of many models, from the Helmholtz machine dayan1995helmholtz to recent VAEs. GANs avoid this by pairing up the generator with a discriminator in a minimax game. However, GANs can be brittle to train Radford:2015wf, and the training signal provided by the discriminator can become weak (vanishing gradients) or misleading (mode collapse). We would like to retain the benefits of both approaches and propose an alternative method that does not make use of a recognition network and instead relies on the generator itself to compute an approximate inverse map $G_\theta^{-1}$ such that $G_\theta(G_\theta^{-1}(x)) \approx x$.

Claim 2.

The generator implicitly defines an approximate inverse, which can be computed with reasonable effort using gradient descent and without the need to co-train a recognition network. We call this approach generator reversal.

Our generator reversal is similar in spirit to kindermann1990inversion , but their intent differs as they use this technique as a tool to visualize the information processing capability of a neural network. Unlike previous works that require the transfer function to be bijective Baird:jr ; Rippel:2013uq , our approach does not strictly have this requirement, although this could still be imposed by carefully selecting the architecture of the network as shown in dinh2016density ; arjovsky2017wasserstein .

Note that, if the above argument holds, we can easily find latent vectors $z_i$ corresponding to given observations $x_i$. This then induces an empirical distribution of "natural" codes, which we can combine with the first argument.

Claim 3.

Using generator reversal we can model natural code distributions in a non-parametric manner, for instance, using kernel density estimation. This can then be incorporated into an improved learning method for GANs.

We have presented our main ideas as claims above and the rest of the paper will make these ideas specific and support them with evidence.

2 Generator Reversal

2.1 Gradient–Based Reversal

Let us begin by making Claim 2 more precise. Given a data point $x$, we aim to compute an approximate code $\hat{z}$ such that $G_\theta(\hat{z}) \approx x$. We do so by simple gradient descent, starting from a random initialization of $z$ (see Algorithm 1). We will come back to the question of how to define the termination condition.

0:  Input: data point $x$, generator $G_\theta$, loss function $\ell$, initial value $z_0$, step size $\eta$
1:  Initialize $z \leftarrow z_0$
2:  repeat
3:      $\hat{x} \leftarrow G_\theta(z)$ {run generator}
4:      $z \leftarrow z - \eta \nabla_z \ell(\hat{x}, x)$ {backpropagate error}
5:  until termination condition
0:  Output: latent code $\hat{z} = z$
Algorithm 1 Generator Reversal
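The following is a minimal sketch of this reversal loop, written in PyTorch for illustration (the experiments in Appendix A use TensorFlow); the function name, the fixed step budget and the learning rate are illustrative choices rather than our exact settings, and the fixed budget stands in for the termination condition.

import torch

def generator_reversal(generator, x, z_dim, n_steps=200, lr=0.1):
    # Recover approximate latent codes z with generator(z) ~ x (cf. Algorithm 1).
    z = torch.randn(x.shape[0], z_dim, requires_grad=True)   # random initialization z_0
    for _ in range(n_steps):                                  # fixed budget in place of
        loss = ((generator(z) - x) ** 2).mean()               # the termination condition
        grad_z, = torch.autograd.grad(loss, z)                # backpropagate error to z only
        with torch.no_grad():
            z -= lr * grad_z                                  # gradient step on the code
    return z.detach()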

A key question is whether good (low-loss) code vectors exist for data points $x$. First of all, whenever $x$ was actually generated by $G_\theta$, then surely we know that a perfect, zero-loss pre-image exists. Of course, finding it exactly would require the exact inverse of the generator map, but our experiments demonstrate that, in practice, an approximate answer is sufficient.

Secondly, if $x$ is in the training data, then, as $G_\theta$ is trained to mimic the true distribution, it would be suboptimal if any such $x$ did not have a suitable pre-image. We thus conjecture that learning a good generator will also improve the quality of the generator reversal, at least for points of interest (generated or data). Note that we do not explicitly train the generator to produce pre-images that would further optimize the training objective. This would require backpropagating through the reversal process, which is certainly possible and would likely yield further improvements.

2.2 Random Network Experiments

It is very hard to give quality guarantees for the approximations obtained via generator reversal. Here, we provide experimental evidence by showing that even a DNN generator with random weights can provide reasonable pre-images for data samples. As we argued above, we believe that actual training of $G_\theta$ will improve the quality of pre-images, so this is in a way a worst-case scenario.

Results for three different image data sets are shown in Figures 1 and 2. Figure 1 shows the average reconstruction error as a function of the number of gradient update steps. We observe that the error decreases steadily as the reconstruction progresses and reaches a low value very quickly. We also show randomly selected reconstructed samples in Figure 2, which reflect the fast decrease in reconstruction error. After only 5 update steps, one can already recognize the general outline of the picture. This is perhaps even more surprising considering that these results were obtained using a generator with completely random weights. A similar finding was reported in he2016powerful, which constructed deep visualizations using untrained neural networks initialized with random weights.


Figure 1: Reconstruction loss in generator networks with random weights.
Figure 2: Reconstruction quality using generator networks with random weights. The left column is the original image, followed by reconstructions after 5, 20 and 400 steps.

2.3 Analysis

Let us provide a simple insight into generator reversal that ensures that locally, gradient descent will provide the correct pre-image.

Proposition 1.

Let $G$ be locally invertible at $\hat{z}$ (note that this is a less restrictive assumption than the diffeomorphism property required in arjovsky2017towards). For a point $x = G(\hat{z})$, the reconstruction problem $\min_z \frac{1}{2}\,\|G(z) - x\|^2$ with the $\ell_2$-loss is locally convex at $\hat{z}$.

Proof.

Let $J_G(z)$ denote the Jacobian of $G$ at $z$ and consider the reconstruction objective $\ell(z) = \frac{1}{2}\,\|G(z) - x\|^2$. We prove the result stated above by showing that the Hessian at $\hat{z}$ is positive semidefinite. The Hessian of $\ell$ is

$$\nabla_z^2\, \ell(z) = J_G(z)^\top J_G(z) + \sum_i \big(G_i(z) - x_i\big)\, \nabla_z^2\, G_i(z),$$

and at $\hat{z}$ the second term vanishes because $G(\hat{z}) = x$. Since $G$ is assumed to be locally invertible, $J_G(\hat{z})$ has full rank, and the Hessian $J_G(\hat{z})^\top J_G(\hat{z})$ is therefore positive semidefinite. ∎

Note that one could also add an $\ell_2$ regularizer on $z$ to obtain a locally strongly convex function.
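As a quick numerical illustration of Proposition 1, the following sketch checks on a small random fully connected generator (an illustrative stand-in, not our experimental architecture) that the Hessian of the reconstruction loss at a perfect pre-image has non-negative eigenvalues:

import torch

torch.manual_seed(0)
# Small random generator standing in for G (illustrative only).
G = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 16))
z_hat = torch.randn(4)
x = G(z_hat).detach()                         # x is generated, so z_hat is a zero-loss pre-image

loss = lambda z: 0.5 * ((G(z) - x) ** 2).sum()
H = torch.autograd.functional.hessian(loss, z_hat)   # reduces to J^T J at z_hat
print(torch.linalg.eigvalsh(H).min())                # non-negative up to numerical error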

3 Code Vector Distribution

3.1 Latent Code Structure

With generator reversal at hand, we can now investigate the empirical distribution of code vectors for data samples. We want to provide evidence that a (GAN-)trained generator retains interesting structure in the latent code space, while this is not the case for a generator with random weights. Moreover, we want to stress that there is far more structure in the latent space of the trained model than an isotropic Gaussian is able to capture. We use data with class labels in order to assess how much of the class information is preserved in the latent codes. Note, however, that we have not used the labels in any way during training.

Results on MNIST are shown in Figure 3, where we have projected latent representations from a trained network to a 2-dimensional space using t-SNE maaten2008visualizing and colored them according to their respective class labels. Compared to an untrained network, there is a clear emergence of structure in the latent space, which we attribute to the GAN training procedure.
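For reference, the projection step can be sketched as follows using scikit-learn's TSNE (the particular t-SNE implementation and the function name are illustrative assumptions; the codes and labels are assumed to come from generator reversal on a labelled sample):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_code_structure(codes: np.ndarray, labels: np.ndarray) -> None:
    # codes: (n, z_dim) array of recovered latent codes; labels: (n,) class labels.
    embedding = TSNE(n_components=2).fit_transform(codes)   # project to 2-D
    plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="tab10", s=5)
    plt.show()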

We believe these results to be paradigmatic in that they show that we should not assume that the latent space density always varies smoothly in all directions, nor that latent space directions necessarily pick up on independent modes of variation. Even if we hard-wire such simplistic assumptions into the model (as we do in GANs), generator reversal reveals that the true code distribution of natural data can be quite different. This hints at a mismatch between modeling assumptions and natural data distributions, which can be most directly overcome by increasing the model flexibility with regard to $P_z$. By off-loading some of the modeling complexity from the mechanism $G_\theta$ to the driving distribution, the generator's task becomes easier, as anisotropic and clustered code distributions do not have to be converted back into multivariate normal densities. This way, one can learn more meaningful representations and ultimately better generative models.

3.2 Kernel Density Estimation

We propose to use a non-parametric method to estimate a density in the space of latent code vectors. We use kernel density estimation (KDE), as it is commonly used in the literature (see e.g. nowozin2016f). We can then use the resulting density to drive the generation of new examples, controlling generalization vs. pure memorization through the bandwidth of the kernel. More concretely, given a translation-invariant kernel $k_h$ with bandwidth $h$ and a sample of codes $z_1, \dots, z_n$, we approximate the probability density at a point $z$ as $\hat{p}_h(z) = \frac{1}{n} \sum_{i=1}^{n} k_h(z - z_i)$. In this work, we chose a kernel density estimator with an RBF kernel. Note that sampling from this distribution is straightforward: pick one of the stored codes uniformly at random and perturb it with kernel noise. This approach can also be combined with mixed sampling from a broader background distribution in order to prevent overfitting to the training data.
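A minimal sketch of the Gaussian (RBF) KDE over recovered codes, together with the corresponding sampling step, is given below; the function names and bandwidth handling are illustrative, and the codes are assumed to come from generator reversal on real data.

import torch

def kde_log_density(z, codes, h):
    # log of the KDE density p_hat(z) with Gaussian kernel of bandwidth h.
    n, d = codes.shape
    sq = ((z.unsqueeze(1) - codes.unsqueeze(0)) ** 2).sum(-1)            # (batch, n)
    log_norm = 0.5 * d * torch.log(torch.tensor(2 * torch.pi * h ** 2))  # Gaussian normalizer
    return torch.logsumexp(-0.5 * sq / h ** 2 - log_norm, dim=1) - torch.log(torch.tensor(float(n)))

def kde_sample(codes, h, n_samples):
    # Sampling from the KDE: pick a stored code uniformly, add kernel noise.
    idx = torch.randint(0, codes.shape[0], (n_samples,))
    return codes[idx] + h * torch.randn(n_samples, codes.shape[1])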

4 Improved GAN Training

What we have done so far is to introduce a technique to estimate a flexible prior over the latent codes of a generator $G_\theta$. This technique revealed clear structure in the latent space, as suggested by the results shown in Figure 3. Yet this information is typically ignored by the standard training procedure for GANs, which we review below.

Standard GAN training.

The main idea behind GANs is to pit two networks, a generator $G$ and a discriminator $D$, against each other. The generator takes as input a random noise vector $z$ drawn from $P_z$ and outputs a sample $G(z)$. The discriminator attempts to differentiate generator samples from samples drawn from the true data distribution $P_x$. Formally, the two networks play the following minimax game:

$$\min_G \max_D \; \mathbb{E}_{x \sim P_x}\big[\log D(x)\big] + \mathbb{E}_{z \sim P_z}\big[\log\big(1 - D(G(z))\big)\big] \qquad (1)$$

In Goodfellow:2014td, the authors suggested minimizing Eq. 1 by alternately optimizing the parameters of $G$ and $D$ using minibatch stochastic gradient descent (SGD). Under the assumptions that $G$ and $D$ have enough capacity and that, at each step of SGD, the discriminator is allowed to reach its optimum, the generator will match the data distribution. However, these assumptions often do not hold in practice, and consequently many problems arise when training GANs. A common problem encountered in practice is the imbalance between the discriminator and the generator. To overcome this difficulty, one can either make the discriminator weaker or make the generator stronger. Most existing work focuses on weakening the discriminator Salimans:2016wg; sonderby2016amortised and has faced the difficulty of finding the right level to which the discriminator should be "dumbed down". Here we pursue the direction of making the generator stronger by enabling it to stay closer to the true data distribution.
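For reference, one alternating minibatch update for Eq. 1 can be sketched as follows, in PyTorch, using the non-saturating generator loss that is standard in practice; the names are illustrative and the discriminator is assumed to end in a sigmoid, as in Appendix A.

import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, x_real, sample_prior):
    # One alternating SGD step for the minimax game of Eq. 1.
    ones = torch.ones(x_real.shape[0], 1)
    zeros = torch.zeros(x_real.shape[0], 1)
    z = sample_prior(x_real.shape[0])

    # Discriminator update: push D(x) towards 1 on data, D(G(z)) towards 0 on samples.
    d_loss = F.binary_cross_entropy(D(x_real), ones) + \
             F.binary_cross_entropy(D(G(z).detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update (non-saturating variant): push D(G(z)) towards 1.
    g_loss = F.binary_cross_entropy(D(G(z)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()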

GAN training with a flexible prior.

We now describe a novel training procedure for GANs that uses the reversal technique presented in Section 2 to continually reconstruct latent representations of data samples. By doing this, we obtain the real data's empirical latent distribution. We will later demonstrate that this yields significant speed-ups at training time. The method we propose is detailed in Algorithm 2. It requires constructing and continuously updating a prior during GAN training. Although this might seem like a costly procedure, we will demonstrate that relatively few gradient steps of the generator reversal process are required at each step of the GAN training loop.

1:  for number of training iterations do
2:      Estimate the code prior $\hat{P}_z$ using the generator reversal process
3:      Sample a minibatch of noise samples $z_1, \dots, z_m$ from $\hat{P}_z$
4:      Sample a minibatch of examples $x_1, \dots, x_m$ from the data distribution
5:      Update the discriminator and generator using SGD
6:  end for
Algorithm 2 Improved GAN training
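A compact sketch of Algorithm 2, composing the illustrative helpers sketched earlier (generator_reversal, kde_sample, gan_step); a fixed reversal step budget stands in here for the MMD-based stopping rule introduced below, and the bandwidth value is a placeholder.

def improved_gan_training(G, D, opt_g, opt_d, data_loader, z_dim, bandwidth=0.1):
    # Skeleton of Algorithm 2: one GAN iteration per data minibatch.
    for x_real in data_loader:
        # Step 2: estimate the code prior by reversing the generator on real data.
        codes = generator_reversal(G, x_real, z_dim)
        # Steps 3-5: drive the usual GAN update with samples from the KDE prior.
        sample_prior = lambda m: kde_sample(codes, bandwidth, m)
        gan_step(G, D, opt_g, opt_d, x_real, sample_prior)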

Ideally, we would like to perform enough reversal steps such that the obtained latent representations are informative enough for the training procedure. This inevitably raises the question of what would be a good heuristic to determine the quality required from generator reversal. Perhaps the most naive way would be to use a fixed number of steps or to fix a threshold on the reconstruction loss. However, these two approaches might waste computation as they disregard the GAN objective presented in Eq. 1. We instead argue for a different criterion:

Claim 4.

The minimum level of accuracy required for the obtained latent representations to be useful is determined by whether generated samples can be differentiated from real data samples in the latent space.

Our claim closely follows the underlying principle of the minimax game, where the discriminator tries to differentiate between real samples drawn from $P_x$ and samples generated by the generator $G$.

Following this principle, we suggest using a statistical test to determine if, in the latent space, samples from the generator are drawn from the same distribution as the latent codes of the real data. As demonstrated by our experiments, this naturally leads to crude reconstructions at the beginning of training, while the reconstruction quality increases as the generator produces samples more similar to the true samples. In the following, we detail our choice of a specific statistical test.

Two-sample test.

The two-sample problem, also known as homogeneity testing, is a classic problem in statistics. Given two i.i.d. samples $X = \{x_i\}_{i=1}^{m}$ and $Y = \{y_j\}_{j=1}^{n}$ drawn from distributions $P$ and $Q$, one would like to test the hypothesis $H_1: P \neq Q$ (i.e. the two samples are drawn from different distributions) against the null hypothesis $H_0: P = Q$. This is equivalent to testing whether $\gamma(P, Q) = 0$ for a metric $\gamma$ on the space of probability measures. In this work, we choose $\gamma$ to be the Maximum Mean Discrepancy (MMD) Gretton:2012wt, which has been applied to the setting of unsupervised learning with GANs in Sutherland:2016wi.

Given two pairs of independent random variables $(x, x') \sim P \times P$ and $(y, y') \sim Q \times Q$, and a characteristic kernel $k$ Fukumizu:2007wz with associated RKHS $\mathcal{H}_k$, the squared population MMD is defined as

$$\mathrm{MMD}^2(P, Q) = \mathbb{E}_{x, x'}\big[k(x, x')\big] - 2\,\mathbb{E}_{x, y}\big[k(x, y)\big] + \mathbb{E}_{y, y'}\big[k(y, y')\big]. \qquad (2)$$

As shown in Gretton:2012wt, various unbiased estimators of $\mathrm{MMD}^2$ can be computed. We will here use a quadratic-time estimator defined as

$$\widehat{\mathrm{MMD}}_u^2 = \frac{1}{m(m-1)} \sum_{i \neq j} k(x_i, x_j) + \frac{1}{n(n-1)} \sum_{i \neq j} k(y_i, y_j) - \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} k(x_i, y_j). \qquad (3)$$

The time complexity for computing this estimator is quadratic in the number of samples, but linear-time approximations exist Gretton:2012wt. In this work, we choose a Gaussian (RBF) kernel for $k$, since it can be efficiently computed and fulfils the requirements of a characteristic kernel.
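A direct sketch of the estimator in Eq. 3 with a Gaussian kernel (illustrative code, quadratic in the minibatch size; the bandwidth value is a placeholder):

import torch

def rbf_kernel(a, b, sigma=1.0):
    # Gaussian kernel matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)).
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    # Quadratic-time unbiased estimator of MMD^2 as in Eq. 3.
    m, n = x.shape[0], y.shape[0]
    kxx, kyy, kxy = rbf_kernel(x, x, sigma), rbf_kernel(y, y, sigma), rbf_kernel(x, y, sigma)
    term_x = (kxx.sum() - kxx.diagonal().sum()) / (m * (m - 1))   # drop i = j terms
    term_y = (kyy.sum() - kyy.diagonal().sum()) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()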

MMD as a stopping criterion.

We now detail how the MMD is used in Algorithm 2 as a stopping criterion. Following Claim 4, we would like to check whether the distribution produced by the generator network in the latent space is a good fit for the latent distribution of the true data. As evaluating the whole distribution would be too expensive, we instead estimate the MMD distance between minibatches of latent codes of real data and of generated samples, using the estimator in Eq. 3, after each step of generator reversal. Once we can confidently distinguish the reconstructed latent representations of the true data from those of the generated data, we terminate the reversal procedure.
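Putting the pieces together, the reversal loop with the MMD-based termination rule can be sketched as follows; illustrative code in which the threshold, learning rate and step limit are placeholders and mmd2_unbiased refers to the sketch above.

import torch

def reversal_with_mmd_stop(G, x_real, x_fake, z_dim, lr=0.1, threshold=0.05, max_steps=500):
    # Reverse a real and a generated minibatch jointly; stop once their codes
    # are confidently distinguishable according to the MMD estimator above.
    x = torch.cat([x_real, x_fake], dim=0)
    n = x_real.shape[0]
    z = torch.randn(x.shape[0], z_dim, requires_grad=True)
    for _ in range(max_steps):
        loss = ((G(z) - x) ** 2).mean()
        grad_z, = torch.autograd.grad(loss, z)
        with torch.no_grad():
            z -= lr * grad_z
        if mmd2_unbiased(z[:n].detach(), z[n:].detach()) > threshold:
            break                       # codes are informative enough; stop early
    return z[:n].detach()               # latent codes of the real data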

Figure 4 shows the maximum mean discrepancy between the obtained latent representations of a data mini-batch and the obtained latent representations of a mini-batch of generated samples as a function of reconstruction steps taken. The increase in MMD matches our expectations: with better reconstructions, we can more confidently distinguish the data’s latent representations from those of the generated samples. Further, the graphs show that the MMD continues to increase even after a large number of steps.

Figure 3: Generator reversal on a sample of 1024 MNIST digits. Projections of data points with an untrained (left) and a fully trained GAN (right). Colors represent the respective class labels. The ratios of between-cluster distances to within-cluster distances are 0.1 (left) and 1.9 (right).
Figure 4: MMD in the latent space between a mini-batch of real data samples and a mini-batch of samples produced by the generator.

Experimental results.

Our experimental setup closely follows popular setups in GAN research and is detailed in the Appendix.

Figure 5 highlights two important results from our experiments: In the top row we plot the inception score Salimans:2016wg on a test set over the course of training. As can be seen, our method reaches higher scores more quickly and retains this advantage until the end of training.

In the bottom row we plot the number of reconstruction steps that are taken for each training step. Recall that this is determined by the MMD two-sample test. As expected, the number of steps needed to produce useful reconstructions increases during training, if only slightly. More details and results of the training procedure can be found in the Appendix.

Visual results.

Figures 6 and 7 compare samples generated from a standard GAN and a KDE GAN as training progresses and at the end of training. The images in Figure 6 clearly demonstrate the faster progress of KDE GAN which generates visually better samples in the early stage of the optimization procedure. The samples generated by the fully trained model in Figure 7 also appear to be of superior quality.

Manifold traversal.

Given two seeding images from the dataset, we linearly interpolate between their latent representations, which we obtain by generator reversal. The resulting images in Figure 8 show that we learn a smooth transition between the data space and the latent space. This indicates that the better performance of KDE GAN does not come at the expense of overfitting to the training data, since the model has a notion of semantically neighbouring images. This is also confirmed by our neighborhood exploration experiments, given in the Appendix.

Figure 5: Top row: Test set inception score over the course of the training procedure. Bottom row: The number of reconstruction steps taken by the generator reversal process in the inner loop until the MMD threshold is reached. Shaded areas denote one standard deviation around the mean, averaged over five training runs.

(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 6: Image samples from the generative model at the beginning of training. Vanilla DCGAN samples on the left, our KDE GAN on the right. Each row is sampled after 50 steps of training, starting at step 100.
(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 7: Dataset images (left) and samples from fully trained vanilla DCGAN (middle) and KDE GAN (right).

Figure 8: Manifold traversal with KDE GAN. We linearly interpolate between the latent codes of two given seeding images (far left and right).

5 Related Work

VAEs vs GANs.

Another popular technique to train a neural generative model is the Variational Autoencoder (VAE) approach proposed by Kingma:2013tz; Rezende:2014vm, which explicitly models the data distribution in the latent space and learns its parameters using an encoder neural network so as to minimize the reconstruction error. A common criticism against VAEs is that they tend to generate blurry samples, a problem which is typically alleviated by GANs.

However, GANs lack an efficient inference mechanism. This problem has been addressed by some recent work, e.g. Che:2016wq; Dumoulin:2016td; donahue2016adversarial, which focuses on training a separate encoder network in order to map a sample from the data space to its latent representation. Our goal is different, however, as our procedure is used to estimate a more flexible prior over the latent space. The importance of using an appropriate prior for GAN models has also been discussed in han2016alternating, which suggested inferring the continuous latent factors in order to maximize the data log-likelihood. However, this approach still makes use of a simple fixed prior distribution over the latent factors and does not use the inferred latent factors to construct a prior as suggested by our approach.

As discussed throughout this paper, standard GAN training requires striking a good balance between the two networks. The work of Metz:2016wg introduced a method to stabilize training by defining the generator objective with respect to an unrolled optimization of the discriminator. They show that a gradient update procedure can be effectively included in the back-propagation step of another gradient update procedure. Although our approach is fundamentally different, it also stabilizes training and can also be implemented end-to-end as part of the back-propagation step.

MMD.

The Maximum Mean Discrepancy measure Gretton:2012wt is a particular instance of an integral probability metric muller1997integral, a class of metrics on probability measures that includes a wide variety of known divergences, such as the Kolmogorov distance, the total variation distance and the Wasserstein distance. For an excellent discussion of the relation of MMD to other integral probability metrics, as well as to other commonly used divergence measures such as the Kullback-Leibler divergence, we refer the reader to Sriperumbudur:2010wp. The connection between MMD and GANs has also been explored in the literature. Both dziugaite2015training and li2015generative proposed an approximation to adversarial learning that replaces the discriminator with the MMD criterion in the data space.

6 Conclusion

We presented a novel approach to estimate a flexible prior over the latent codes of a generator $G_\theta$. This is achieved through a reversal technique that continually reconstructs latent representations of data samples and uses these reconstructions to construct a prior over the latent codes. We empirically demonstrated that this technique yields several benefits, including more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization.

References

  • (1) Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In NIPS 2016 Workshop on Adversarial Training. In review for ICLR, volume 2016, 2017.
  • (2) Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
  • (3) L Baird, D Smalenberger, and S Ingkiriwang. One-step neural network inversion with PDF learning and emulation. In International Joint Conference on Neural Networks, pages 966–971. IEEE, 2005.
  • (4) Pierre Baldi and Kurt Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53–58, 1989.
  • (5) D. J. Bartholomew. Latent Variable Models and Factor Analysis. London: Griffin, 1987.
  • (6) Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. Greedy layer-wise training of deep networks. Advances in neural information processing systems, 19:153, 2007.
  • (7) Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode Regularized Generative Adversarial Networks. December 2016.
  • (8) Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The helmholtz machine. Neural computation, 7(5):889–904, 1995.
  • (9) David DeMers and GW Cottrell. n–linear dimensionality reduction. Adv. Neural Inform. Process. Sys, 5:580–587, 1993.
  • (10) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
  • (11) Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
  • (12) Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially Learned Inference. June 2016.
  • (13) Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural networks via Maximum Mean Discrepancy optimization. May 2015.
  • (14) Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. arXiv preprint arXiv:1505.03906, 2015.
  • (15) Kenji Fukumizu, Arthur Gretton, Xiaohai Sun, and Bernhard Schölkopf. Kernel Measures of Conditional Dependence. NIPS, 2007.
  • (16) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. pages 2672–2680, 2014.
  • (17) Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A Kernel Two-Sample Test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
  • (18) Tian Han, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. Alternating back-propagation for generator network. arXiv preprint arXiv:1606.08571, 2016.
  • (19) Kun He, Yan Wang, and John Hopcroft. A powerful generative model using random weights for the deep image representation. In Advances In Neural Information Processing Systems, pages 631–639, 2016.
  • (20) Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006.
  • (21) Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. science, 313(5786):504–507, 2006.
  • (22) Olav Kallenberg. Foundations of modern probability. Springer Science & Business Media, 2006.
  • (23) Joerg Kindermann and Alexander Linden. Inversion of neural networks by gradient descent. Parallel computing, 14(3):277–286, 1990.
  • (24) Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv.org, December 2013.
  • (25) Yujia Li, Kevin Swersky, and Richard S Zemel. Generative moment matching networks. In ICML, pages 1718–1727, 2015.
  • (26) Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
  • (27) David JC MacKay. Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 354(1):73–80, 1995.
  • (28) Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722, 2017.
  • (29) Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled Generative Adversarial Networks. arXiv.org, November 2016.
  • (30) Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(02):429–443, 1997.
  • (31) Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pages 271–279, 2016.
  • (32) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. November 2015.
  • (33) D J Rezende, S Mohamed, and D Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv.org, 2014.
  • (34) Oren Rippel and Ryan Prescott Adams. High-Dimensional Probability Estimation with Deep Density Models. CoRR, 2013.
  • (35) Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved Techniques for Training GANs. June 2016.
  • (36) Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
  • (37) Bharath K Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert R G Lanckriet. Hilbert Space Embeddings and Metrics on Probability Measures. Journal of Machine Learning Research, 2010.
  • (38) Dougal J Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, and Arthur Gretton. Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy. November 2016.
  • (39) Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
  • (40) Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
  • (41) Gale Young. Maximum likelihood estimation and factor analysis. Psychometrika, 6(1):49–53, 1941.

Appendix A Detailed Experiment Setup.

Our experimental setup closely follows popular setups in GAN research in order to facilitate reproducibility and enable qualitative comparisons of results. Our network architectures are as follows:

The generator samples from a latent space of dimension 200, which is fed through a padded deconvolution to form an initial intermediate representation, which is then fed through three layers of deconvolutions with 256, 128 and 64 filters, followed by a last deconvolution to get to the desired output size and number of channels. For the CelebA dataset, which we cropped and rescaled to an image size of 64×64, we employ an additional deconvolutional layer with 512 filters between the first and second layer.

The discriminator consists of three convolutional layers with 256, 128 and 64 filters, followed by a fully connected layer and a sigmoid classifier. Again, for the CelebA dataset, the discriminator is augmented with an additional convolutional layer using 512 filters.

Both the generator and the discriminator use strided (de)convolution filters in order to up- and downscale the representations, respectively. The generator employs ReLU non-linearities, except for the last layer, which uses the hyperbolic tangent. The discriminator uses Leaky ReLU non-linearities with a leak parameter that is standard in the GAN literature.

We use RMSProp [39] with a fixed step size and mini-batches of size 100 to optimize both the generator and the discriminator. Both networks are updated once per iteration.

For the generator reversal process, we use a fixed learning rate and terminate based on an MMD threshold, computed with a fixed kernel bandwidth. The initial noise vectors are sampled from a normal distribution.

We train until we can no longer see any significant qualitative improvement in the generated images or any quantitative improvement in the inception score. This amounts to 3 epochs on MNIST, 10 epochs on SVHN, 50 epochs on CIFAR10 and 5 epochs on CelebA.

We crop the images of the CelebA dataset, after which we resize them to 64×64 pixels. Our images from MNIST, SVHN and CIFAR10 retain their original sizes of 28×28, 32×32 and 32×32 pixels, respectively.

As for the effect of generator reversal on wall clock time, in our TensorFlow implementation, an iteration performing 5, 20 or 50 generator reversal steps takes about 1.9, 4.9 or 9.8 times as long as an iteration without generator reversal, but there is still a lot of room for optimization.

Appendix B Additional Training Metrics

Figure 9: Top row: Discriminator mean accuracy over the course of the training procedure. Bottom row: Training loss of the generator as training progresses. Shaded areas denote one standard deviation around the mean over five repetitions.

Figure 9 shows the training loss of the generator as well as the mean accuracy of the discriminator over the course of training. Both metrics indicate that our training procedure improves the generator: it achieves a lower loss and manages to fool the discriminator more easily.

Appendix C Additional Evaluation Metrics

Nearest-neighbor test.

We evaluate the generalization properties of our model using the common procedure of examining nearest neighbors in the data space [16, 29]. Specifically, we take samples from the trained generator and compute their nearest-neighbor images in pixel space from the training data. We then report the average distance between each sample and its nearest neighbor. The results in Table 1 show that the distance is the same as for the standard GAN approach, which indicates that our model has similar generalization properties and thus does not simply overfit to the training data.
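The evaluation itself is straightforward; a minimal sketch (with illustrative tensor names) is:

import torch

def mean_nn_distance(samples, train_images):
    # For each generated sample, distance to its closest training image in pixel space.
    d = torch.cdist(samples.flatten(1), train_images.flatten(1))   # pairwise L2 distances
    return d.min(dim=1).values.mean()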

normal kde
MNIST
SVHN
CIFAR10
Table 1: Mean pixel-wise euclidean distance (and standard deviation) from the generated samples to their nearest neighbor in the training set.

Holdout Likelihood.

We sample a set of images from a holdout test dataset and compute their latent representation using Generator Reversal. We then evaluate the likelihood of the latent codes under the distribution obtained from KDE over the latent space. The results shown in Figure 10 demonstrate that our approach achieves a very similar holdout likelihood compared to the GAN model. The fact that data that has not been seen during training is assigned substantial likelihood is evidence for the generalization capacity of our approach and indicates that it is not a simple memorization of the training data. An interesting direction for future work would be to investigate the role of the kernel bandwidth parameter in the tradeoff between memorization and generalization.

Figure 10: Holdout likelihood of a test set on MNIST, SVHN and CIFAR10, computed in the latent code space, while varying the kernel bandwidth used in the KDE computation.

Appendix D Additional Experimental Results

Recovery of dropped modes.

We create the MNIST1000 dataset by randomly concatenating 3 digits from the MNIST dataset. This yields a dataset with 1000 modes. To make the task more challenging, we bias sampling in favor of the high digits in a linear fashion (a 9 is 20 times more likely to be sampled than a 1). We compare the different training procedures and their ability to recover from mode dropping. We simulate mode dropping by first restricting the dataset to samples that consist only of the digits 6, 7, 8 and 9. We train the model until convergence on this limited dataset, then provide the full dataset and continue training, again until convergence. The final inception score is a measure of how well each model is able to recover previously dropped modes. We compare Generator Reversal to a vanilla GAN as well as to BiGAN [11], since we hypothesize that the inclusion of an encoder network might help recover from mode dropping. The results are shown in Table 2. As can be seen, all models are able to recover from the initial mode dropping, but both the vanilla GAN and BiGAN do not reach the same score as when they are trained on the full dataset from the beginning. This indicates that some of the dropped modes have not been recovered. In comparison, Generator Reversal reaches an inception score comparable to training on the full dataset, which indicates that it can successfully recover from mode dropping.

Restricted Dataset Full Dataset after restriction Full Dataset
Vanilla GAN 107 196 248
BiGAN 102 200 265
Generator Reversal 99 289 291
Table 2: Inception scores of GAN, BiGAN and GAN trained using Generator Reversal on the MNIST1000 dataset after restriction to only 3-digit numbers made up of digits 6, 7, 8 and 9. Scores on the full dataset are provided for comparison.

More Trained Samples.

Figures 11, 12, 13, 14 and 15 show more samples from the fully trained models of vanilla DCGAN and our KDE GAN.

(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 11: Dataset images (left) and samples from fully trained vanilla DCGAN (middle) and KDE GAN (right).
(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 12: Dataset images (left) and samples from fully trained vanilla DCGAN (middle) and KDE GAN (right).
(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 13: Dataset images (left) and samples from fully trained vanilla DCGAN (middle) and KDE GAN (right).
(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 14: Dataset images (left) and samples from fully trained vanilla DCGAN (middle) and KDE GAN (right).
(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 15: Dataset images (left) and samples from fully trained vanilla DCGAN (middle) and KDE GAN (right).

More Beginning-Of-Training Samples.

Figures 16, 17, 18 and 19 show additional samples from vanilla DCGAN and our KDE GAN at the beginning of training.

(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 16: Image samples from the generative model at the beginning of training. Vanilla DCGAN samples on the left, our KDE GAN on the right. Each row is sampled after 50 steps of training, starting at step 100.
(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 17: Image samples from the generative model at the beginning of training. Vanilla DCGAN samples on the left, our KDE GAN on the right. Each row is sampled after 50 steps of training, starting at step 100.
(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 18: Image samples from the generative model at the beginning of training. Vanilla DCGAN samples on the left, our KDE GAN on the right. Each row is sampled after 50 steps of training, starting at step 100.
(a) mnist
(b) svhn
(c) cifar10
(d) celeba64
Figure 19: Image samples from the generative model at the beginning of training. Vanilla DCGAN samples on the left, our KDE GAN on the right. Each row is sampled after 50 steps of training, starting at step 100.

More Manifold Traversals.

We perform more manifold traversal experiments and show the results in Figure 20.

Figure 20: Manifold traversal with KDE GAN. We linearly interpolate between the latent codes of two given seeding images (far left and right).

Latent Neighborhood Exploration.

We now perform an experiment similar to manifold traversal, but targeted at checking the local structure of the latent manifold around a data point. This is achieved by picking a random seeding image from the dataset, obtaining its latent representation $\hat{z}$ via generator reversal, and generating images by sampling from neighborhoods of increasing size centered at $\hat{z}$. The results give more evidence of the ability of the KDE GAN model to learn a neighborhood structure. Displayed are images for a small neighborhood in Figure 21, a medium one in Figure 22, a large one in Figure 23 and an overly large one in Figure 24.

Figure 21: KDE GAN neighborhood exploration for a small neighborhood around a seeding image (shown on the left).

Figure 22: KDE GAN neighborhood exploration for a medium-sized neighborhood around a seeding image (shown on the left).

Figure 23: KDE GAN neighborhood exploration for a large neighborhood around a seeding image (shown on the left).

Figure 24: KDE GAN neighborhood exploration for an overly large neighborhood around a seeding image (shown on the left).