1 Introduction
Probabilistic generative models describe a probability distribution over a given domain $\mathcal{X}$, for example a distribution over natural language sentences, natural images, or recorded waveforms.
Given a generative model $Q$ from a class $\mathcal{Q}$ of possible models, we are generally interested in performing one or multiple of the following operations:

Sampling. Produce a sample $x \sim Q$. By inspecting samples or calculating a function on a set of samples we can obtain important insight into the distribution or solve decision problems.

Estimation. Given a set of iid samples $\{x_1, x_2, \dots, x_n\}$ from an unknown true distribution $P$, find $Q \in \mathcal{Q}$ that best describes the true distribution.

Pointwise likelihood evaluation. Given a sample $x$, evaluate the likelihood $Q(x)$.
Generative-adversarial networks (GANs) in the form proposed by [10] are an expressive class of generative models that allow exact sampling and approximate estimation. The model used in a GAN is simply a feedforward neural network which receives as input a vector of random numbers, sampled, for example, from a uniform distribution. This random input is passed through each layer in the network and the final layer produces the desired output, for example, an image. Clearly, sampling from a GAN model is efficient because only one forward pass through the network is needed to produce one exact sample.
Such probabilistic feedforward neural network models were first considered in [22] and [3]; here we call these models generative neural samplers. GAN is also of this type, as is the decoder model of a variational autoencoder [18]. In the original GAN paper the authors show that it is possible to estimate neural samplers by approximate minimization of the symmetric Jensen-Shannon divergence,
$D_{\text{JS}}(P\|Q) = \tfrac{1}{2} D_{\text{KL}}\!\left(P \,\|\, \tfrac{1}{2}(P+Q)\right) + \tfrac{1}{2} D_{\text{KL}}\!\left(Q \,\|\, \tfrac{1}{2}(P+Q)\right),$ (1)
where $D_{\text{KL}}$ denotes the Kullback-Leibler divergence. The key technique used in GAN training is that of introducing a second "discriminator" neural network which is optimized simultaneously. Because $D_{\text{JS}}(P\|Q)$ is a proper divergence measure between distributions, the true distribution $P$ can be approximated well in case there are sufficient training samples and the model class $\mathcal{Q}$ is rich enough to represent $P$.

In this work we show that the principle of GANs is more general and we can extend the variational divergence estimation framework proposed by Nguyen et al. [25] to recover the GAN training objective and generalize it to arbitrary f-divergences.
More concretely, we make the following contributions over the state of the art:

We derive the GAN training objectives for all f-divergences and provide as example additional divergence functions, including the Kullback-Leibler and Pearson divergences.

We simplify the saddle-point optimization procedure of Goodfellow et al. [10] and provide a theoretical justification.

We provide experimental insight into which divergence function is suitable for estimating generative neural samplers for natural images.
2 Method
We first review the divergence estimation framework of Nguyen et al. [25], which is based on f-divergences. We then extend this framework from divergence estimation to model estimation.
2.1 The f-divergence Family
Statistical divergences such as the well-known Kullback-Leibler divergence measure the difference between two given probability distributions. A large class of different divergences are the so-called f-divergences [5, 21], also known as the Ali-Silvey distances [1]. Given two distributions $P$ and $Q$ that possess, respectively, absolutely continuous density functions $p$ and $q$ with respect to a base measure $dx$ defined on the domain $\mathcal{X}$, we define the f-divergence,
$D_f(P\|Q) = \int_{\mathcal{X}} q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) \mathrm{d}x,$ (2)
where the generator function $f : \mathbb{R}_+ \to \mathbb{R}$ is a convex, lower-semicontinuous function satisfying $f(1) = 0$. Different choices of $f$ recover popular divergences as special cases of (2). We illustrate common choices in Table 5. See the supplementary material for more divergences and plots.
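To make definition (2) concrete, the integral can be evaluated numerically for simple one-dimensional densities. The sketch below uses two hypothetical Gaussians for $P$ and $Q$ (not part of the paper's experiments) and checks that the generator $f(u) = u \log u$ recovers the closed-form Kullback-Leibler divergence between Gaussians:

```python
import numpy as np

def f_divergence(p, q, f, xs):
    # D_f(P||Q) = integral of q(x) * f(p(x)/q(x)) dx, approximated on a grid.
    px, qx = p(xs), q(xs)
    return np.sum(qx * f(px / qx)) * (xs[1] - xs[0])

def gauss(mu, sigma):
    return lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

xs = np.linspace(-15.0, 15.0, 200_001)
p, q = gauss(0.0, 1.0), gauss(1.0, 2.0)  # hypothetical P and Q

# Generator f(u) = u log u yields the Kullback-Leibler divergence KL(P||Q).
kl_numeric = f_divergence(p, q, lambda u: u * np.log(u), xs)
# Closed form for two Gaussians, for comparison.
kl_exact = np.log(2.0 / 1.0) + (1.0**2 + (0.0 - 1.0) ** 2) / (2 * 2.0**2) - 0.5
```

Swapping the lambda for another generator from Table 5 yields the corresponding divergence on the same grid.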
2.2 Variational Estimation of f-divergences
Nguyen et al. [25] derive a general variational method to estimate f-divergences given only samples from $P$ and $Q$. We will extend their method from merely estimating a divergence for a fixed model to estimating model parameters. We call this new method variational divergence minimization (VDM) and show that generative-adversarial training is a special case of this more general VDM framework.
For completeness, we first provide a self-contained derivation of Nguyen et al.'s divergence estimation procedure. Every convex, lower-semicontinuous function $f$ has a convex conjugate function $f^*$, also known as the Fenchel conjugate [14]. This function is defined as
$f^*(t) = \sup_{u \in \operatorname{dom}_f} \left\{ ut - f(u) \right\}.$ (3)
The function $f^*$ is again convex and lower-semicontinuous, and the pair $(f, f^*)$ is dual to one another in the sense that $f^{**} = f$. Therefore, we can also represent $f$ as $f(u) = \sup_{t \in \operatorname{dom}_{f^*}} \{ tu - f^*(t) \}$. Nguyen et al. leverage the above variational representation of $f$ in the definition of the f-divergence to obtain a lower bound on the divergence,
$D_f(P\|Q) \geq \sup_{T \in \mathcal{T}} \left( \mathbb{E}_{x \sim P}\left[T(x)\right] - \mathbb{E}_{x \sim Q}\left[f^*(T(x))\right] \right),$ (4)
where $\mathcal{T}$ is an arbitrary class of functions $T : \mathcal{X} \to \mathbb{R}$. The above derivation yields a lower bound for two reasons: first, because of Jensen's inequality when swapping the integration and supremum operations; second, because the class of functions $\mathcal{T}$ may contain only a subset of all possible functions.
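The conjugate (3) underlying the bound (4) can itself be checked numerically: maximising $ut - f(u)$ over a dense grid of $u$ should match the closed form, e.g. $f^*(t) = \exp(t-1)$ for the KL generator $f(u) = u\log u$. A minimal sketch (the grid bounds are ad-hoc choices):

```python
import numpy as np

# f*(t) = sup_u { t*u - f(u) }, approximated by maximising over a grid of u,
# here for the Kullback-Leibler generator f(u) = u log u.
us = np.linspace(1e-6, 50.0, 2_000_000)
f_vals = us * np.log(us)

def conjugate_numeric(t):
    return np.max(t * us - f_vals)

# Closed form of the conjugate: f*(t) = exp(t - 1), with maximiser u* = exp(t - 1).
errors = [abs(conjugate_numeric(t) - np.exp(t - 1.0)) for t in (-1.0, 0.0, 0.5, 1.5)]
```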
By taking the variation of the lower bound in (4) w.r.t. $T$, we find that under mild conditions on $f$ [25], the bound is tight for
$T^*(x) = f'\!\left(\frac{p(x)}{q(x)}\right),$ (5)
where $f'$ denotes the first-order derivative of $f$. This condition can serve as a guiding principle for choosing $f$ and designing the class of functions $\mathcal{T}$. For example, the popular reverse Kullback-Leibler divergence corresponds to $f(u) = -\log u$, resulting in $T^*(x) = -q(x)/p(x)$; see Table 5.
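Plugging the optimal $T^*$ of (5) into the bound (4) should recover the divergence. A Monte Carlo sketch with hypothetical Gaussians for $P$ and $Q$ (for $f(u)=u\log u$ we have $f'(u)=1+\log u$ and $f^*(t)=\exp(t-1)$):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

mu_p, s_p, mu_q, s_q = 0.0, 1.0, 1.0, 2.0  # hypothetical P and Q
xp = rng.normal(mu_p, s_p, 300_000)  # samples from P
xq = rng.normal(mu_q, s_q, 300_000)  # samples from Q

def T_star(x):
    # Optimal variational function (5) for KL: f'(p/q) = 1 + log(p(x)/q(x)).
    return 1.0 + log_gauss(x, mu_p, s_p) - log_gauss(x, mu_q, s_q)

f_star = lambda t: np.exp(t - 1.0)  # Fenchel conjugate of u log u

# Lower bound (4) evaluated at T = T*: E_P[T*] - E_Q[f*(T*)].
bound = T_star(xp).mean() - f_star(T_star(xq)).mean()
kl_exact = np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q) ** 2) / (2 * s_q**2) - 0.5
```

With a restricted class $\mathcal{T}$ the same estimator would only lower-bound the divergence, which is the gap the VDM framework exploits.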
We list common f-divergences in Table 5 and provide their Fenchel conjugates $f^*$ and the domains $\operatorname{dom}_{f^*}$ in Table 6. We provide plots of the generator functions and their conjugates in the supplementary material.
Name | Generator $f(u)$
--- | ---
Kullback-Leibler | $u \log u$
Reverse KL | $-\log u$
Pearson $\chi^2$ | $(u-1)^2$
Squared Hellinger | $\left(\sqrt{u}-1\right)^2$
Jensen-Shannon | $-(u+1)\log\frac{1+u}{2} + u \log u$
GAN | $u \log u - (u+1)\log(u+1)$
2.3 Variational Divergence Minimization (VDM)
We now use the variational lower bound (4) on the f-divergence $D_f(P\|Q)$ in order to estimate a generative model $Q$ given a true distribution $P$.
To this end, we follow the generative-adversarial approach [10] and use two neural networks, $Q$ and $T$. $Q$ is our generative model, taking as input a random vector and outputting a sample of interest. We parametrize $Q$ through a vector $\theta$ and write $Q_\theta$. $T$ is our variational function, taking as input a sample and returning a scalar. We parametrize $T$ using a vector $\omega$ and write $T_\omega$.
We can learn a generative model $Q_\theta$ by finding a saddle point of the following objective function, where we minimize with respect to $\theta$ and maximize with respect to $\omega$,
$F(\theta, \omega) = \mathbb{E}_{x \sim P}\left[T_\omega(x)\right] - \mathbb{E}_{x \sim Q_\theta}\left[f^*(T_\omega(x))\right].$ (6)
To optimize (6) on a given finite training data set, we approximate the expectations using minibatch samples. To approximate $\mathbb{E}_{x \sim P}[\cdot]$ we sample $B$ instances without replacement from the training set. To approximate $\mathbb{E}_{x \sim Q_\theta}[\cdot]$ we sample $B$ instances from the current generative model $Q_\theta$.
2.4 Representation for the Variational Function
To apply the variational objective (6) to different f-divergences, we need to respect the domain $\operatorname{dom}_{f^*}$ of the conjugate function $f^*$. To this end, we assume that the variational function is represented in the form $T_\omega(x) = g_f(V_\omega(x))$ and rewrite the saddle objective (6) as follows:

$F(\theta, \omega) = \mathbb{E}_{x \sim P}\left[g_f(V_\omega(x))\right] + \mathbb{E}_{x \sim Q_\theta}\left[-f^*\!\big(g_f(V_\omega(x))\big)\right],$ (7)

where $V_\omega : \mathcal{X} \to \mathbb{R}$ is without any range constraints on the output, and $g_f : \mathbb{R} \to \operatorname{dom}_{f^*}$ is an output activation function specific to the f-divergence used. In Table 6 we propose suitable output activation functions for the various conjugate functions $f^*$ and their domains.^1 Although the choice of $g_f$ is somewhat arbitrary, we choose all of them to be monotone increasing functions, so that a large output $V_\omega(x)$ corresponds to the belief of the variational function that the sample $x$ comes from the data distribution, as in the GAN case; see Figure 1. It is also instructive to look at the second term $-f^*(g_f(v))$ in the saddle objective (7). This term is typically (except for the Pearson divergence) a decreasing function of the output $V_\omega(x)$, favoring variational functions that output negative numbers for samples from the generator.

^1 Note that for numerical implementation we recommend directly implementing the scalar function $f^*(g_f(\cdot))$ robustly instead of evaluating the two functions in sequence; see Figure 1.

Name | Output activation $g_f(v)$ | $\operatorname{dom}_{f^*}$ | Conjugate $f^*(t)$ | $f'(1)$
--- | --- | --- | --- | ---
Kullback-Leibler (KL) | $v$ | $\mathbb{R}$ | $\exp(t-1)$ | $1$
Reverse KL | $-\exp(-v)$ | $\mathbb{R}_-$ | $-1-\log(-t)$ | $-1$
Pearson $\chi^2$ | $v$ | $\mathbb{R}$ | $\frac{1}{4}t^2 + t$ | $0$
Squared Hellinger | $1-\exp(-v)$ | $t<1$ | $\frac{t}{1-t}$ | $0$
Jensen-Shannon | $\log 2 - \log(1+\exp(-v))$ | $t<\log 2$ | $-\log(2-\exp(t))$ | $0$
GAN | $-\log(1+\exp(-v))$ | $\mathbb{R}_-$ | $-\log(1-\exp(t))$ | $-\log 2$
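The pairs in Table 6 can be exercised directly; the point of each activation $g_f$ is that $f^*(g_f(v))$ stays finite for every real $v$. A sketch for a few of the divergences, with closed forms following Table 6:

```python
import numpy as np

# (g_f, f*) pairs; each g_f maps an unconstrained v into dom(f*).
pairs = {
    "kl":      (lambda v: v,                      lambda t: np.exp(t - 1.0)),
    "rev_kl":  (lambda v: -np.exp(-v),            lambda t: -1.0 - np.log(-t)),
    "pearson": (lambda v: v,                      lambda t: 0.25 * t**2 + t),
    "js":      (lambda v: np.log(2.0) - np.log1p(np.exp(-v)),
                lambda t: -np.log(2.0 - np.exp(t))),
    "gan":     (lambda v: -np.log1p(np.exp(-v)),  lambda t: -np.log(-np.expm1(t))),
}

vs = np.linspace(-10.0, 10.0, 2001)
# f*(g_f(v)) should be finite everywhere, since g_f(v) lies inside dom(f*).
finite = {name: np.isfinite(fs(g(vs))).all() for name, (g, fs) in pairs.items()}
```

Note that the GAN conjugate is written as `-np.log(-np.expm1(t))`, i.e. $-\log(1-e^t)$ computed stably, in the spirit of footnote 1.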
2.5 Example: Univariate Mixture of Gaussians
  | KL | KL-rev | JS | Jeffrey | Pearson
--- | --- | --- | --- | --- | ---
$D_f(P\|Q_{\theta^*})$ | 0.2831 | 0.2480 | 0.1280 | 0.5705 | 0.6457
$F(\hat\omega, \hat\theta)$ | 0.2801 | 0.2415 | 0.1226 | 0.5151 | 0.6379
$\mu^*$ | 1.0100 | 1.5782 | 1.3070 | 1.3218 | 0.5737
$\hat\mu$ | 1.0335 | 1.5624 | 1.2854 | 1.2295 | 0.6157
$\sigma^*$ | 1.8308 | 1.6319 | 1.7542 | 1.7034 | 1.9274
$\hat\sigma$ | 1.8236 | 1.6403 | 1.7659 | 1.8087 | 1.9031
train \ test | KL | KL-rev | JS | Jeffrey | Pearson
--- | --- | --- | --- | --- | ---
KL | 0.2808 | 0.3423 | 0.1314 | 0.5447 | 0.7345
KL-rev | 0.3518 | 0.2414 | 0.1228 | 0.5794 | 1.3974
JS | 0.2871 | 0.2760 | 0.1210 | 0.5260 | 0.9216
Jeffrey | 0.2869 | 0.2975 | 0.1247 | 0.5236 | 0.8849
Pearson | 0.2970 | 0.5466 | 0.1665 | 0.7085 | 0.648
Table 3: Gaussian approximation of a mixture of Gaussians. Left: optimal objectives, and the learned mean and standard deviation: $\hat\theta = (\hat\mu, \hat\sigma)$ (learned) and $\theta^* = (\mu^*, \sigma^*)$ (best fit). Right: objective values w.r.t. the true distribution for each trained model. For each divergence, the lowest objective function value is achieved by the model that was trained for this divergence.

To demonstrate the properties of the different f-divergences and to validate the variational divergence estimation framework, we perform an experiment similar to the one of [24].
Setup. We approximate a mixture of Gaussians by learning a Gaussian distribution. We represent our model $Q_\theta$ using a linear function which receives a random input $z \sim \mathcal{N}(0,1)$ and outputs $G_\theta(z) = \mu + \sigma z$, where $\theta = (\mu, \sigma)$ are the two scalar parameters to be learned. For the variational function $T_\omega$ we use a small neural network with two hidden layers and tanh activations. We optimise the objective $F(\omega, \theta)$ by using the single-step gradient method presented in Section 3. In each step we sample batches of equal size from both $P$ and $Q_\theta$, and we use the same fixed step-size for updating both $\omega$ and $\theta$. We compare the results to the best fit provided by the exact optimization of $D_f(P\|Q_\theta)$ w.r.t. $\theta$, which is feasible in this case by solving the required integrals in (2) numerically. We use $\hat\theta, \hat\omega$ (learned) and $\theta^*$ (best fit) to distinguish the parameter sets used in these two approaches.

Results. The left side of Table 3 shows the optimal divergence and objective values $D_f(P\|Q_{\theta^*})$ and $F(\hat\omega, \hat\theta)$, as well as the resulting means and standard deviations. Note that the results are in line with the lower bound property, that is, we have $F(\hat\omega, \hat\theta) \leq D_f(P\|Q_{\theta^*})$. There is a good correspondence between the gap in objectives and the difference between the fitted means and standard deviations. The right side of Table 3 shows the results of the following experiment: (1) we train $T_\omega$ and $Q_\theta$ using a particular divergence, then (2) we estimate the divergence and re-train $T_\omega$ while keeping $Q_\theta$ fixed. As expected, $Q_\theta$ performs best on the divergence it was trained with. Further details, including plots of the fitted Gaussians and the optimal variational functions, are presented in the supplementary material.
In summary, the above results demonstrate that when the generative model is misspecified and does not contain the true distribution, the divergence function used for estimation has a strong influence on which model is learned.
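A minimal numerical illustration of this point (a sketch with an arbitrary two-component mixture, not the exact mixture used in the experiment above): for the forward KL, the best Gaussian fit found by grid search over (2) should match the mixture's mean and variance (moment matching), whereas mode-seeking divergences such as the reverse KL generally select different parameters.

```python
import numpy as np

xs = np.linspace(-10.0, 10.0, 20_001)
dx = xs[1] - xs[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical misspecified target: a two-component Gaussian mixture.
p = 0.5 * gauss(xs, -1.0, 0.5) + 0.5 * gauss(xs, 1.0, 0.5)

best_kl, mu_best, sigma_best = np.inf, None, None
for mu in np.arange(-0.5, 0.51, 0.05):
    for sigma in np.arange(0.8, 1.5, 0.01):
        q = gauss(xs, mu, sigma)
        kl = np.sum(p * np.log(p / q)) * dx  # forward KL(P||Q): mass-covering
        if kl < best_kl:
            best_kl, mu_best, sigma_best = kl, mu, sigma

# Moment matching predicts mu = 0 and sigma = sqrt(0.5**2 + 1**2) ~ 1.118.
```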
3 Algorithms for Variational Divergence Minimization (VDM)
We now discuss numerical methods to find saddle points of the objective (6). To this end, we distinguish two methods: first, the alternating method originally proposed by Goodfellow et al. [10], and second, a more direct single-step optimization procedure.
In our variational framework, the alternating gradient method can be described as a double-loop method; the internal loop tightens the lower bound on the divergence, whereas the outer loop improves the generator model. While the motivation for this method is plausible, in practice the choice of taking a single step in the inner loop is popular. Goodfellow et al. [10] provide a local convergence guarantee.
3.1 Single-Step Gradient Method
Motivated by the success of the alternating gradient method with a single inner step, we propose a simpler algorithm shown in Algorithm 1. The algorithm differs from the original one in that there is no inner loop and the gradients with respect to $\omega$ and $\theta$ are computed in a single back-propagation pass.
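The core of this algorithm is one simultaneous gradient step per iteration, ascending in $\omega$ and descending in $\theta$. A toy sketch on a quadratic objective that is strongly convex in $\theta$ and strongly concave in $\omega$ (not the paper's $F$):

```python
# Single-step gradient method on a toy saddle objective
# F(theta, omega) = theta**2 - omega**2 + theta * omega (saddle at the origin):
# descend in theta and ascend in omega with one simultaneous update per step.
def grad_F(theta, omega):
    return 2 * theta + omega, theta - 2 * omega  # (dF/dtheta, dF/domega)

theta, omega, eta = 1.0, -1.0, 0.1
for _ in range(200):
    g_theta, g_omega = grad_F(theta, omega)
    theta, omega = theta - eta * g_theta, omega + eta * g_omega
# Both parameters spiral into the saddle point (0, 0).
```

For this linear update the iteration matrix has spectral radius below one, so the iterates converge geometrically, matching the flavor of the analysis that follows.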
Analysis.
Here we show that Algorithm 1 converges geometrically to a saddle point $(\theta^*, \omega^*)$ if there is a neighborhood around the saddle point in which $F$ is strongly convex in $\theta$ and strongly concave in $\omega$. These conditions are similar to the assumptions made in [10] and can be formalized as follows:
$\nabla_\theta F(\theta^*, \omega^*) = 0, \quad \nabla_\omega F(\theta^*, \omega^*) = 0, \quad \nabla^2_\theta F(\theta, \omega) \succeq \delta I, \quad \nabla^2_\omega F(\theta, \omega) \preceq -\delta I,$ (9)

for some constant $\delta > 0$.
These assumptions are necessary except for the “strong” part in order to define the type of saddle points that are valid solutions of our variational framework. Note that although there could be many saddle points that arise from the structure of deep networks [6], they do not qualify as the solution of our variational framework under these assumptions.
For convenience, let us define $\pi = (\theta, \omega)$ and $\pi^t = (\theta^t, \omega^t)$. Now the convergence of Algorithm 1 can be stated as follows (the proof is given in the supplementary material):
Theorem 1.
Suppose that there is a saddle point $\pi^* = (\theta^*, \omega^*)$ with a neighborhood that satisfies conditions (9). Moreover, we define $J(\pi) = \frac{1}{2}\|\nabla F(\pi)\|_2^2$ and assume that in the above neighborhood, $J$ is sufficiently smooth so that there is a constant $L > 0$ such that $\|\nabla J(\pi') - \nabla J(\pi)\|_2 \leq L \|\pi' - \pi\|_2$ for any $\pi, \pi'$ in the neighborhood of $\pi^*$. Then using the step-size $\eta = \delta/L$ in Algorithm 1, we have

$J(\pi^t) \leq \left(1 - \frac{\delta^2}{L}\right)^t J(\pi^0).$

That is, the squared norm of the gradient $\|\nabla F(\pi)\|_2^2$ decreases geometrically.
3.2 Practical Considerations
Here we discuss principled extensions of the heuristic proposed in [10] and the real/fake statistics discussed by Larsen and Sønderby.^2 Furthermore, we discuss practical advice that slightly deviates from the principled viewpoint.

^2 http://torch.ch/blog/2015/11/13/gan.html

Goodfellow et al. [10] noticed that training GAN can be significantly sped up by maximizing $\mathbb{E}_{x \sim Q_\theta}[\log D_\omega(x)]$ instead of minimizing $\mathbb{E}_{x \sim Q_\theta}[\log(1 - D_\omega(x))]$ when updating the generator. In the more general f-GAN Algorithm 1 this means that we replace line 4 with the update
$\theta^{t+1} = \theta^t + \eta\, \nabla_\theta\, \mathbb{E}_{x \sim Q_{\theta^t}}\left[g_f(V_{\omega^t}(x))\right],$ (10)
thereby maximizing the generator output. This is not only intuitively correct, but we can show that the stationary point is preserved by this change using the same argument as in [10]; we found this useful also for other divergences.
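For the GAN divergence the effect of (10) is easy to see numerically. With $g_f(v) = -\log(1+e^{-v})$ the original generator term in (7) is $-f^*(g_f(v)) = -\log(1+e^{v})$, whose gradient vanishes when the variational function confidently rejects a sample ($v \ll 0$), while the gradient of $g_f(v)$ itself stays large. A small sketch:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

v = -10.0  # variational function confidently rejects the generated sample

# d/dv of the original generator term -log(1 + exp(v)): equals -sigmoid(v).
grad_original = -sigmoid(v)     # ~0: the update barely moves theta
# d/dv of the alternative target g_f(v) = -log(1 + exp(-v)): equals sigmoid(-v).
grad_alternative = sigmoid(-v)  # ~1: strong learning signal early in training
```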
Larsen and Sønderby recommended monitoring real and fake statistics, which are defined as the true positive and true negative rates of the variational function viewed as a binary classifier. Since our output activations $g_f$ are all monotone, we can derive similar statistics for any f-divergence by only changing the decision threshold. Due to the link between the density ratio and the variational function (5), the threshold lies at $f'(1)$ (see Table 6). That is, we can interpret the output of the variational function as classifying the input $x$ as a true sample if the variational function $T_\omega(x)$ is larger than $f'(1)$, and classifying it as a sample from the generator otherwise.

We found Adam [17] and gradient clipping to be useful, especially in the large-scale experiment on the LSUN dataset.
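The real/fake statistics described above reduce to thresholding the variational output at $f'(1)$. A sketch with hypothetical synthetic scores for the GAN divergence, whose threshold is $f'(1) = -\log 2$:

```python
import numpy as np

def real_fake_stats(T_real, T_fake, threshold):
    # True-positive / true-negative rates of T viewed as a binary classifier
    # with decision threshold f'(1).
    tpr = float(np.mean(T_real > threshold))   # real samples classified as real
    tnr = float(np.mean(T_fake <= threshold))  # generated samples classified as fake
    return tpr, tnr

rng = np.random.default_rng(0)
thr = -np.log(2.0)  # f'(1) for the GAN generator function
T_real = thr + rng.normal(0.5, 0.2, 10_000)   # hypothetical outputs on data
T_fake = thr + rng.normal(-0.5, 0.2, 10_000)  # hypothetical outputs on samples
tpr, tnr = real_fake_stats(T_real, T_fake, thr)
```

For another divergence, only `thr` changes, per the $f'(1)$ column of Table 6.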
4 Experiments
We now train generative neural samplers based on VDM on the MNIST and LSUN datasets.
MNIST Digits.
We use the MNIST training data set (60,000 samples, 28-by-28 pixel images) to train the generator and variational function model proposed in [10] for various f-divergences. Taking the random noise vector $z$ as input, the generator model has two linear layers, each followed by batch normalization and ReLU activation, and a final linear layer followed by the sigmoid function. The variational function $V_\omega(x)$ has three linear layers with exponential linear units [4] in between. The final activation is specific to each divergence and listed in Table 6. As in [27] we use Adam, with the learning rate and update weight chosen as recommended there. We use a batch size of 4096, sampled from the training set without replacement, and train each model for one hour. We also compare against variational autoencoders [18] with 20 latent dimensions.

Results and Discussion.
We evaluate the performance using the kernel density estimation (Parzen window) approach used in [10]. To this end, we sample 16k images from the model and fit a Parzen window estimator using an isotropic Gaussian kernel, with the bandwidth chosen by three-fold cross-validation. The final density model is used to evaluate the average log-likelihood on the MNIST test set (10k samples). We show the results in Table 4, and some samples from our models in Figure 2.

The use of the KDE approach to log-likelihood estimation has known deficiencies [31]. In particular, for the dimensionality of MNIST the number of model samples required to obtain accurate log-likelihood estimates is infeasibly large. We found a large variability (up to 50 nats) between multiple repetitions. As such, the results are not entirely conclusive. We also trained the same KDE estimator on the MNIST training set, achieving a significantly higher holdout likelihood. However, it is reassuring to see that the model trained for the Kullback-Leibler divergence indeed achieves a high holdout likelihood compared to the GAN model.
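The Parzen window evaluation itself is straightforward to sketch: fit an isotropic Gaussian kernel density on model samples and average the log-density of held-out points. A low-dimensional toy stand-in with a fixed (rather than cross-validated) bandwidth:

```python
import numpy as np

def parzen_avg_loglik(test, samples, sigma):
    # Average log-likelihood of `test` under an isotropic Gaussian Parzen
    # window estimator centred on `samples` (arrays of shape [n, d]).
    n, d = samples.shape
    sq = ((test[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    log_k = -0.5 * sq / sigma**2
    log_norm = np.log(n) + 0.5 * d * np.log(2 * np.pi * sigma**2)
    m = log_k.max(axis=1)  # stabilise the log-sum-exp over kernel centres
    return float((m + np.log(np.exp(log_k - m[:, None]).sum(axis=1)) - log_norm).mean())

rng = np.random.default_rng(0)
model_samples = rng.normal(0.0, 1.0, size=(2_000, 2))  # stand-in for generator samples
test_points = rng.normal(0.0, 1.0, size=(500, 2))
avg_ll = parzen_avg_loglik(test_points, model_samples, sigma=0.25)
```

In 784 dimensions the same estimator is exactly where the deficiencies discussed above appear: the kernel sum is dominated by the few nearest samples, so the estimate is both biased and high-variance.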
LSUN Natural Images.
Through the DCGAN work [27], the generative-adversarial approach has shown real promise in generating natural-looking images. Here we use the same architecture as in [27] and replace the GAN objective with our more general f-GAN objective.
We use the large-scale LSUN database [34] of natural images of different categories. To illustrate the different behaviors of different divergences, we train the same model on the classroom category of images, containing 168,103 images of classroom environments, rescaled and center-cropped to 96-by-96 pixels.
Setup. We use the generator architecture and training settings proposed in DCGAN [27]. The model receives a random noise vector $z$ and feeds it through one linear layer and three deconvolution layers with batch normalization and ReLU activation in between. The variational function is the same as the discriminator architecture in [27] and follows the structure of a convolutional neural network with batch normalization, exponential linear units [4], and one final linear layer.

Results. Figure 3 shows 16 random samples from neural samplers trained using the GAN, KL, and squared Hellinger divergences. All three divergences produce equally realistic samples. Note that differences in the learned distribution arise only when the generator model is not rich enough.
5 Related Work
We now discuss how our approach relates to existing work. Building generative models of real world distributions is a fundamental goal of machine learning and much related work exists. We only discuss work that applies to neural network models.
Mixture density networks [2] are neural networks which directly regress the parameters of a finite parametric mixture model. When combined with a recurrent neural network, this yields impressive generative models of handwritten text [11].

NADE [19] and RNADE [33] perform a factorization of the output using a predefined and somewhat arbitrary ordering of output dimensions. The resulting model samples one variable at a time, conditioning on the entire history of past variables. These models provide tractable likelihood evaluations and compelling results, but it is unclear how to select the factorization order in many applications.
Diffusion probabilistic models [29] define a target distribution as a result of a learned diffusion process which starts at a trivial known distribution. The learned model provides exact samples and approximate log-likelihood evaluations.
Noise contrastive estimation (NCE) [13]
is a method that estimates the parameters of unnormalized probabilistic models by performing nonlinear logistic regression to discriminate the data from artificially generated noise. NCE can be viewed as a special case of GAN where the discriminator is constrained to a specific form that depends on the model (a logistic regression classifier) and the generator (kept fixed) provides the artificially generated noise (see supplementary material).
The generative neural sampler models of [22] and [3] did not provide satisfactory learning methods; [22] used importance sampling and [3] expectation maximization. The main difference from GAN and from our work is in the learning objective, which in our case is effective and computationally inexpensive.
Variational autoencoders (VAE) [18, 28] are pairs of probabilistic encoder and decoder models which map a sample to a latent representation and back, trained using a variational Bayesian learning objective. The advantage of VAEs lies in the encoder model, which allows efficient inference from observation to latent representation; overall they are a compelling alternative to GANs, and recent work has studied combinations of the two approaches [23].
As an alternative to the GAN training objective, the work [20] and independently [7] considered the use of the kernel maximum mean discrepancy (MMD) [12, 9] as a training objective for probabilistic models. This objective is simpler to train than GAN models because there is no explicitly represented variational function. However, it requires the choice of a kernel function, and the reported results so far seem slightly inferior compared to GAN. MMD is a particular instance of a larger class of probability metrics [30], which all take the form $\gamma_{\mathcal{T}}(P, Q) = \sup_{T \in \mathcal{T}} \left| \mathbb{E}_{x \sim P}[T(x)] - \mathbb{E}_{x \sim Q}[T(x)] \right|$, where the function class $\mathcal{T}$ is chosen in a manner specific to the metric. Beyond MMD, other popular metrics of this form are the total variation metric (also an f-divergence), the Wasserstein distance, and the Kolmogorov distance.
In [16] a generalisation of the GAN objective is proposed by using an alternative Jensen-Shannon divergence that mimics an interpolation between the KL and the reverse KL divergences and has the Jensen-Shannon divergence as its midpoint. It can be shown that with the interpolation weight $\pi$ close to 0 and 1, it leads to behavior similar to the objectives resulting from the KL and reverse KL divergences, respectively (see supplementary material).

6 Discussion
Generative neural samplers offer a powerful way to represent complex distributions without limiting factorizing assumptions. However, while the purely generative neural samplers used in this paper are interesting, their use is limited because after training they cannot be conditioned on observed data and are thus unable to provide inferences.
We believe that in the future the true benefits of neural samplers for representing uncertainty will be found in discriminative models and our presented methods extend readily to this case by providing additional inputs to both the generator and variational function as in the conditional GAN model [8].
Acknowledgements. We thank Ferenc Huszár for discussions on the generativeadversarial approach.
References
 Ali and Silvey [1966] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. JRSS (B), pages 131–142, 1966.
 Bishop [1994] C. M. Bishop. Mixture density networks. Technical report, Aston University, 1994.
 Bishop et al. [1998] C. M. Bishop, M. Svensén, and C. K. I. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215–234, 1998.
 Clevert et al. [2015] D. A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv:1511.07289, 2015.
 Csiszár and Shields [2004] I. Csiszár and P. C. Shields. Information theory and statistics: A tutorial. Foundations and Trends in Communications and Information Theory, 1:417–528, 2004.
 Dauphin et al. [2014] Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, pages 2933–2941, 2014.
 Dziugaite et al. [2015] G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, pages 258–267, 2015.
 Gauthier [2014] J. Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester 2014, 2014.
 Gneiting and Raftery [2007] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. JASA, 102(477):359–378, 2007.
 Goodfellow et al. [2014] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
 Graves [2013] A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.
 Gretton et al. [2007] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In NIPS, pages 585–592, 2007.
 Gutmann and Hyvärinen [2010] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, pages 297–304, 2010.
 Hiriart-Urruty and Lemaréchal [2012] J.-B. Hiriart-Urruty and C. Lemaréchal. Fundamentals of convex analysis. Springer, 2012.
 [15] F. Huszár. An alternative update rule for generative adversarial networks. http://www.inference.vc/an-alternative-update-rule-for-generative-adversarial-networks/.
 Huszár [2015] F. Huszár. How (not) to train your generative model: scheduled sampling, likelihood, adversary? arXiv:1511.05101, 2015.
 Kingma and Ba [2014] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
 Kingma and Welling [2013] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv:1312.6114, 2013.
 Larochelle and Murray [2011] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.

 Li et al. [2015] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In ICML, 2015.
 Liese and Vajda [2006] F. Liese and I. Vajda. On divergences and informations in statistics and information theory. Information Theory, IEEE, 52(10):4394–4412, 2006.
 MacKay [1995] D. J. C. MacKay. Bayesian neural networks and density networks. Nucl. Instrum. Meth. A, 354(1):73–80, 1995.
 Makhzani et al. [2015] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow. Adversarial autoencoders. arXiv:1511.05644, 2015.
 Minka [2005] T. Minka. Divergence measures and message passing. Technical report, Microsoft Research, 2005.
 Nguyen et al. [2010] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. Information Theory, IEEE, 56(11):5847–5861, 2010.
 Nielsen and Nock [2014] F. Nielsen and R. Nock. On the chi-square and higher-order chi distances for approximating f-divergences. Signal Processing Letters, IEEE, 21(1):10–13, 2014.
 Radford et al. [2015] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.

 Rezende et al. [2014] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
 Sohl-Dickstein et al. [2015] J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, pages 2256–2265, 2015.
 Sriperumbudur et al. [2010] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. Lanckriet. Hilbert space embeddings and metrics on probability measures. JMLR, 11:1517–1561, 2010.
 Theis et al. [2015] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. arXiv:1511.01844, 2015.

 Tokui et al. [2015] S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a next-generation open source framework for deep learning. In NIPS, 2015.
 Uria et al. [2013] B. Uria, I. Murray, and H. Larochelle. RNADE: The real-valued neural autoregressive density-estimator. In NIPS, pages 2175–2183, 2013.
 Yu et al. [2015] F. Yu, Y. Zhang, S. Song, A. Seff, and J. Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv:1506.03365, 2015.
Appendix A Introduction
We provide additional material to support the content presented in the paper. The text is structured as follows. In Section B we present an extended list of f-divergences, corresponding generator functions, and their convex conjugates. In Section C we provide the proof of Theorem 1 from Section 3. In Section D we discuss the differences between current (to our knowledge) GAN optimisation algorithms. Section E provides a proof of concept of our approach by fitting a Gaussian to a mixture of Gaussians using various divergence measures. Finally, in Section F we present the details of the network architectures used in Section 4 of the main text.
Appendix B f-divergences and Generator-Conjugate Pairs
In Table 5 we show an extended list of f-divergences together with their generators and the corresponding optimal variational functions $T^*$. For all f-divergences we have $D_f(P\|Q) = \int_{\mathcal{X}} q(x)\, f\!\left(\frac{p(x)}{q(x)}\right)\mathrm{d}x$, where $f : \mathbb{R}_+ \to \mathbb{R}$ is convex and lower-semicontinuous. Also we have $f(1) = 0$, which ensures that $D_f(P\|P) = 0$ for any distribution $P$. As shown by [10], GAN is related to the Jensen-Shannon divergence through $D_{\text{GAN}} = 2 D_{\text{JS}} - \log 4$. The GAN generator function does not satisfy $f(1) = 0$, hence $D_{\text{GAN}}(P\|P) \neq 0$.
Table 6 lists the convex conjugate functions of the generator functions in Table 5, their domains, as well as the activation functions we use in the last layers of the generator networks to obtain a correct mapping of the network outputs into the domains of the conjugate functions.
The panels of Figure 4 show the generator functions and the corresponding convex conjugate functions for a variety of f-divergences.
Name | Generator $f(u)$
--- | ---
Total variation | $\frac{1}{2}|u-1|$
Kullback-Leibler | $u\log u$
Reverse Kullback-Leibler | $-\log u$
Pearson $\chi^2$ | $(u-1)^2$
Neyman $\chi^2$ | $\frac{(1-u)^2}{u}$
Squared Hellinger | $\left(\sqrt{u}-1\right)^2$
Jeffrey | $(u-1)\log u$
Jensen-Shannon | $-(u+1)\log\frac{1+u}{2} + u\log u$
Jensen-Shannon-weighted | $\pi u\log u - (1-\pi+\pi u)\log(1-\pi+\pi u)$
GAN | $u\log u - (u+1)\log(u+1)$
$\alpha$-divergence ($\alpha\notin\{0,1\}$) | $\frac{1}{\alpha(\alpha-1)}\left(u^\alpha - 1 - \alpha(u-1)\right)$
Name | Output activation $g_f(v)$ | $\operatorname{dom}_{f^*}$ | Conjugate $f^*(t)$ | $f'(1)$
--- | --- | --- | --- | ---
Total variation | $\frac{1}{2}\tanh(v)$ | $-\frac{1}{2}\leq t\leq\frac{1}{2}$ | $t$ | $0$
Kullback-Leibler (KL) | $v$ | $\mathbb{R}$ | $\exp(t-1)$ | $1$
Reverse KL | $-\exp(-v)$ | $\mathbb{R}_-$ | $-1-\log(-t)$ | $-1$
Pearson $\chi^2$ | $v$ | $\mathbb{R}$ | $\frac{1}{4}t^2+t$ | $0$
Neyman $\chi^2$ | $1-\exp(-v)$ | $t<1$ | $2-2\sqrt{1-t}$ | $0$
Squared Hellinger | $1-\exp(-v)$ | $t<1$ | $\frac{t}{1-t}$ | $0$
Jeffrey | $v$ | $\mathbb{R}$ | $W(e^{1-t}) + \frac{1}{W(e^{1-t})} + t - 2$ | $0$
Jensen-Shannon | $\log 2-\log(1+\exp(-v))$ | $t<\log 2$ | $-\log(2-\exp(t))$ | $0$
Jensen-Shannon-weighted | $-\pi\log\pi-\log(1+\exp(-v))$ | $t<-\pi\log\pi$ | $(1-\pi)\log\frac{1-\pi}{1-\pi e^{t/\pi}}$ | $0$
GAN | $-\log(1+\exp(-v))$ | $\mathbb{R}_-$ | $-\log(1-\exp(t))$ | $-\log 2$
$\alpha$-div. ($\alpha<1$, $\alpha\neq 0$) | $\frac{1}{1-\alpha}-\log(1+\exp(-v))$ | $t<\frac{1}{1-\alpha}$ | $\frac{1}{\alpha}\big(t(\alpha-1)+1\big)^{\frac{\alpha}{\alpha-1}}-\frac{1}{\alpha}$ | $0$
$\alpha$-div. ($\alpha>1$) | $v$ | $\mathbb{R}$ | $\frac{1}{\alpha}\big(t(\alpha-1)+1\big)^{\frac{\alpha}{\alpha-1}}-\frac{1}{\alpha}$ | $0$

Here $W$ denotes the Lambert-W function.
Appendix C Proof of Theorem 1
In this section we present the proof of Theorem 1 from Section 3 of the main text. For completeness, we reiterate the conditions and the theorem.
We assume that $F$ is strongly convex in $\theta$ and strongly concave in $\omega$ in a neighborhood of the saddle point $(\theta^*, \omega^*)$, at which $\nabla F(\theta^*, \omega^*) = 0$, such that

$\nabla^2_\theta F(\theta, \omega) \succeq \delta I,$ (11)
$\nabla^2_\omega F(\theta, \omega) \preceq -\delta I,$ (12)

for some constant $\delta > 0$.
These assumptions are necessary except for the “strong” part in order to define the type of saddle points that are valid solutions of our variational framework.
Given the above assumptions and notation, in Section 3 of the main text we formulate the following theorem.
Theorem 2.
Suppose that there is a saddle point $\pi^* = (\theta^*, \omega^*)$ with a neighborhood that satisfies conditions (11) and (12). Moreover, we define $J(\pi) = \frac{1}{2}\|\nabla F(\pi)\|_2^2$ and assume that in the above neighborhood, $J$ is sufficiently smooth so that there is a constant $L > 0$ and

$\|\nabla J(\pi') - \nabla J(\pi)\|_2 \leq L \|\pi' - \pi\|_2$ (13)

for any $\pi, \pi'$ in the neighborhood of $\pi^*$. Then using the step-size $\eta = \delta/L$ in Algorithm 1, we have

$J(\pi^t) \leq \left(1 - \frac{\delta^2}{L}\right)^t J(\pi^0),$

where $L$ is the smoothness parameter of $J$. That is, the squared norm of the gradient $\|\nabla F(\pi)\|_2^2$ decreases geometrically.
Proof.
First, note that the gradient of $J$ can be written as $\nabla J(\pi) = \nabla^2 F(\pi)\, \nabla F(\pi)$. Denote by $v(\pi) = (\nabla_\theta F(\pi), -\nabla_\omega F(\pi))$ the signed gradient used in Algorithm 1, so that the update reads $\pi^{t+1} = \pi^t - \eta\, v(\pi^t)$. Conditions (11) and (12) imply $\langle v(\pi), \nabla J(\pi)\rangle \geq \delta \|\nabla F(\pi)\|_2^2$, and together with the smoothness of $J$ and the step-size $\eta = \delta/L$ we obtain

$J(\pi^{t+1}) \leq J(\pi^t) - \eta\, \langle v(\pi^t), \nabla J(\pi^t)\rangle + \frac{L\eta^2}{2}\, \|v(\pi^t)\|_2^2 \leq J(\pi^t) - \frac{\delta^2}{2L}\, \|\nabla F(\pi^t)\|_2^2.$ (14)

In other words, Algorithm 1 decreases $J$ by an amount proportional to the squared norm of $\nabla F(\pi^t)$; since $\|\nabla F(\pi^t)\|_2^2 = 2 J(\pi^t)$, the claimed geometric decrease follows.
Appendix D Related Algorithms
Due to recent interest in GAN-type models, there have been attempts to derive other divergence measures and algorithms. In particular, an alternative Jensen-Shannon divergence has been derived in [16], and a heuristic algorithm that behaves similarly to the one resulting from this new divergence has been proposed in [15].

In this section we summarise (some of) the current algorithms and show how they are related. Note that some algorithms use heuristics that do not correspond to saddle point optimisation; that is, in the corresponding maximisation and minimisation steps they optimise alternative objectives that do not add up to a coherent joint objective. We include a short discussion of [13] because it can be viewed as a special case of GAN.

To illustrate how the discussed algorithms work, we define the objective function

$F(\theta, \omega; \alpha, \beta) = \mathbb{E}_{x\sim P}\left[\log D_\omega(x)\right] + \alpha\, \mathbb{E}_{x\sim Q_\theta}\left[\log(1 - D_\omega(x))\right] - \beta\, \mathbb{E}_{x\sim Q_\theta}\left[\log D_\omega(x)\right],$ (15)

where we introduce two scalar parameters, $\alpha$ and $\beta$, to help us highlight the differences between the algorithms shown in Table 7.
Algorithm | Maximisation in $\omega$ | Minimisation in $\theta$
--- | --- | ---
NCE [13] | $\alpha=1,\ \beta=0$ | NA
GAN-1 [10] | $\alpha=1,\ \beta=0$ | $\alpha=1,\ \beta=0$
GAN-2 [10] | $\alpha=1,\ \beta=0$ | $\alpha=0,\ \beta=1$
GAN-3 [15] | $\alpha=1,\ \beta=0$ | $\alpha=1,\ \beta=1$
NoiseContrastive Estimation (NCE)
NCE [13] is a method that estimates the parameters of an unnormalised model by performing nonlinear logistic regression to discriminate between the model and artificially generated noise. To achieve this, NCE casts the estimation problem as maximum likelihood (ML) estimation in a binary classification model where the data is augmented with artificially generated data. The "true" data items are labeled as positives while the artificially generated data items are labeled as negatives. The discriminant function is defined as $D_\omega(x) = p_{\text{model}}(x) / (p_{\text{model}}(x) + p_{\text{noise}}(x))$, where $p_{\text{noise}}$ denotes the distribution of the artificially generated data, typically a Gaussian parameterised by the empirical mean and covariance of the true data. ML estimation in this binary classification model results in an objective of the form (15) with $\alpha = 1$ and $\beta = 0$, where the expectations are taken w.r.t. the empirical distribution of the augmented data. As a result, NCE can be viewed as a special case of GAN where the generator is fixed and we only have to maximise the objective w.r.t. the parameters of the discriminator. Another slight difference is that in this case the data distribution is learned through the discriminator, not the generator; however, the method has many conceptual similarities to GAN.
GAN-1 and GAN-2
The first algorithm (GAN-1) proposed in [10] performs a stochastic gradient ascent-descent on the objective with $\alpha = 1$ and $\beta = 0$; however, the authors point out that in practice it is more advantageous to minimise $-\mathbb{E}_{x\sim Q_\theta}[\log D_\omega(x)]$ instead of $\mathbb{E}_{x\sim Q_\theta}[\log(1 - D_\omega(x))]$; we denote this variant by GAN-2. This is motivated by the observation that in the early stages of training, when $Q_\theta$ is not sufficiently well fitted, $D_\omega$ can saturate fast, leading to weak gradients in $\mathbb{E}_{x\sim Q_\theta}[\log(1 - D_\omega(x))]$. The $-\mathbb{E}_{x\sim Q_\theta}[\log D_\omega(x)]$ term, however, can provide stronger gradients and leads to the same fixed point. This heuristic can be viewed as using $\alpha = 1$, $\beta = 0$ in the maximisation step and $\alpha = 0$, $\beta = 1$ in the minimisation step.^3

^3 A somewhat similar observation regarding the artificially generated data is made in [13]: in order to have meaningful training one should choose the artificially generated data to be close to the true data, hence the choice of an ML multivariate Gaussian.
GAN-3
In [15] a further heuristic for the minimisation step is proposed. Formally, it can be viewed as a combination of the minimisation steps in GAN-1 and GAN-2. In the proposed algorithm, the maximisation step is performed as before ($\alpha = 1$, $\beta = 0$), but the minimisation uses both terms, $\alpha = 1$ and $\beta = 1$. This choice is motivated by KL optimality arguments. The author makes the observation that the optimal discriminator is given by
$D^*(x) = \frac{p(x)}{p(x) + q_\theta(x)},$ (16)
and thus, close to optimality, the minimisation of $\mathbb{E}_{x\sim Q_\theta}[\log(1 - D_\omega(x)) - \log D_\omega(x)]$ corresponds to the minimisation of the reverse KL divergence $D_{\text{KL}}(Q_\theta \| P)$. This approach can be viewed as choosing $\alpha = 1$ and $\beta = 1$ in the minimisation step.
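The optimality claim in (16) is easy to verify pointwise: for fixed density values $p(x)$ and $q_\theta(x)$, the discriminator objective $p \log d + q \log(1-d)$ is maximised at $d = p/(p+q)$. A sketch with hypothetical density values:

```python
import numpy as np

p, q = 0.7, 0.2  # hypothetical density values p(x) and q_theta(x) at one point x
ds = np.linspace(1e-4, 1.0 - 1e-4, 100_000)

# Pointwise discriminator objective p*log(d) + q*log(1 - d).
objective = p * np.log(ds) + q * np.log(1.0 - ds)
d_best = ds[np.argmax(objective)]
d_star = p / (p + q)  # the claimed optimum (16)
```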
Remarks on the Weighted Jensen-Shannon Divergence in [16]
The GAN/variational objective corresponding to the alternative Jensen-Shannon divergence measure proposed in [16] (see Jensen-Shannon-weighted in Table 5) is

$F_\pi(\theta, \omega) = \pi\, \mathbb{E}_{x\sim P}\left[\log D_\omega(x)\right] + (1-\pi)\, \mathbb{E}_{x\sim Q_\theta}\left[\log(1 - D_\omega(x))\right].$ (17)
Note the correspondence to the GAN objective, which is recovered (up to scaling) at $\pi = 1/2$. According to the definition of the variational objective, when $D_\omega$ is close to optimal, then in the minimisation step the objective function is close to the chosen divergence. In this case the optimal discriminator is
$D^*(x) = \frac{\pi\, p(x)}{\pi\, p(x) + (1-\pi)\, q_\theta(x)}.$ (18)
The objective in (17) vanishes when $\pi = 0$ or $\pi = 1$; however, when $\pi$ is close to 0 or close to 1, it can behave similarly to the KL and reverse KL objectives, respectively. Overall, the connection between GAN-3 and the optimisation of (17) can only be considered approximate. To obtain exact KL or reverse KL behavior one can use the corresponding variational objectives. For a simple illustration of how these divergences behave, see Section 2.5 and Section E below.
Appendix E Details of the Univariate Example
We follow up on the example in Section 2.5 of the main text by presenting further details about the quality and behavior of the approximations resulting from using various divergence measures. For completeness, we reiterate the setup and then we present further results.






Setup. We approximate a mixture of Gaussians by learning a Gaussian distribution, using the setup described in Section 2.5 of the main text.