1 Introduction
Generative adversarial networks (GANs) (Goodfellow et al., 2014) excel at constructing realistic images (Radford et al., 2016; Ledig et al., 2016; Isola et al., 2017; Arjovsky et al., 2017; Berthelot et al., 2017) and text (Gulrajani et al., 2017). In GAN learning, a discriminator network guides the learning of another, generative network. This procedure can be considered as a game between the generator which constructs synthetic data and the discriminator which separates synthetic data from training set data (Goodfellow, 2017). The generator’s goal is to construct data which the discriminator cannot tell apart from training set data. GAN convergence points are local Nash equilibria. At these local Nash equilibria neither the discriminator nor the generator can locally improve its objective.
Despite their recent successes, GANs have several problems. First (I), until recently it was not clear whether in general gradient-based GAN learning could converge to one of the local Nash equilibria (Salimans et al., 2016; Goodfellow, 2014; Goodfellow et al., 2014). It is even possible to construct counterexamples (Goodfellow, 2017). Second (II), GANs suffer from "mode collapsing", where the model generates samples only in certain regions, which are called modes. While these modes contain realistic samples, the variety is low and only a few prototypes are generated. Mode collapsing is less likely if the generator is trained with batch normalization, since the network is bound to create a certain variance among its generated samples within one batch
(Radford et al., 2016; Chintala et al., 2016). However, batch normalization introduces fluctuations of normalizing constants which can be harmful (Klambauer et al., 2017; Goodfellow, 2017). To avoid mode collapsing without batch normalization, several methods have been proposed (Che et al., 2017; Metz et al., 2016; Salimans et al., 2016). Third (III), GANs cannot assure that the density of training samples is correctly modeled by the generator. The discriminator only tells the generator whether a region is more likely to contain samples from the training set or synthetic samples. Therefore the discriminator can only distinguish the support of the model distribution from the support of the target distribution. Beyond matching the support of distributions, GANs with proper objectives may learn to locally align model and target densities via averaging over many training examples. On a global scale, however, GANs fail to equalize model and target densities. The discriminator does not inform the generator globally where probability mass is missing. Consequently, standard GANs are not assured to capture the global sample density and are prone to neglect large parts of the target distribution. The next paragraph gives an example of this. Fourth (IV), the discriminator of GANs may forget previous modeling errors of the generator, which then may reappear, a property that leads to oscillatory behavior instead of convergence
(Goodfellow, 2017). Recently, problem (I) was solved by proving that GAN learning does indeed converge when discriminator and generator are learned using a two time-scale learning rule (Heusel et al., 2017). Convergence means that the expected SGD gradients of both the discriminator objective and the generator objective are zero. Thus, neither the generator nor the discriminator can locally improve, i.e., learning has reached a local Nash equilibrium. However, convergence alone does not guarantee good generative performance: it is possible to converge to suboptimal solutions which are local Nash equilibria. Mode collapse is a special case of a local Nash equilibrium associated with suboptimal generative performance. For example, assume a two-mode real world distribution where one mode contains too few and the other mode too many generator samples. If no real world samples lie between these two distinct modes, then the discriminator penalizes moving generated samples outside the modes. Therefore the generated samples cannot be correctly distributed over the modes. Thus, standard GANs cannot capture the global sample density, and the resulting generators are prone to neglect large parts of the real world distribution. A more detailed example is given in the Appendix in Section A.1.
In this paper, we introduce a novel GAN model, the Coulomb GAN, which has only one Nash equilibrium. We will later show that this Nash equilibrium is optimal, i.e., the model distribution matches the target distribution. We propose Coulomb GANs to avoid the GAN shortcomings (II) to (IV) by using a potential field created by point charges, analogously to the electric field in physics. The next section introduces the idea of learning in a potential field and proves that its only solution is optimal. We then show how learning the discriminator and generator works in a Coulomb GAN and discuss the assumptions needed for our optimality proof. In Section 3 we show that the Coulomb GAN does indeed work well in practice and that the samples it produces have very large variability and appear to capture the original distribution very well.
Related Work.
Several GAN approaches have been suggested for bringing the target and model distributions into alignment using not just local discriminator information: Geometric GANs combine samples via a linear support vector machine which uses the discriminator outputs as samples, and are therefore much more robust to mode collapsing (Lim & Ye, 2017). Energy-Based GANs (Zhao et al., 2017) and their later improvement BEGANs (Berthelot et al., 2017) optimize an energy landscape based on autoencoders. McGANs match mean and covariance of synthetic and target data and are therefore better suited than standard GANs to approximate the target distribution (Mroueh et al., 2017). In a similar fashion, Generative Moment Matching Networks (Li et al., 2015) and MMD nets (Dziugaite et al., 2015) directly optimize a generator network to match a training distribution by using a loss function based on the maximum mean discrepancy (MMD) criterion (Gretton et al., 2012). These approaches were later expanded to include an MMD criterion with learnable kernels and discriminators (Li et al., 2017). The MMD criterion that these later approaches optimize has a form similar to the energy function that Coulomb GANs optimize (cf. Eq. (33)). However, all MMD approaches end up using either Gaussian or Laplace kernels, which are not guaranteed to find the optimal solution where the model distribution matches the target distribution. In contrast, the Plummer kernel employed in this work has been shown to lead to the optimal solution (Hochreiter & Obermayer, 2005). We show that even a simplified version of the Plummer kernel, the low-dimensional Plummer kernel, ensures that gradient descent converges to the optimal solution, as stated by Theorem 1. Furthermore, most MMD GAN approaches use the MMD directly as loss function even though the number of possible samples in a minibatch is limited. Therefore MMD approaches face a sampling problem in high-dimensional spaces. The Coulomb GAN instead learns a discriminator network that gradually improves its approximation of the potential field via learning on many minibatches. The discriminator network also tracks the slowly changing generator distribution during learning. Most importantly, however, our approach is, to the best of our knowledge, the first one for which optimality, i.e., the ability to perfectly learn a target distribution, can be proved.

The use of the Coulomb potential for learning is not new. Coulomb Potential Learning was proposed to store arbitrarily many patterns in a potential field with perfect recall and without spurious patterns (Perrone & Cooper, 1995). Another related work is the Potential Support Vector Machine (P-SVM), which minimizes Coulomb potential differences (Hochreiter & Mozer, 2001; Hochreiter et al., 2003). Hochreiter & Obermayer (2005) also used a potential function based on Plummer kernels for optimal unsupervised learning, on which we base our work on Coulomb GANs.
2 Coulomb GANs
2.1 General Considerations on GANs
We assume data samples x ∼ p_x(x) for a model density p_x and data samples y ∼ p_y(y) for a target density p_y. The goal of GAN learning is to modify the model in a way to obtain p_x = p_y. We define the difference of densities ρ(a) = p_y(a) − p_x(a), which should be pushed toward zero for all a during learning. In the GAN setting, the discriminator D(a) is a function that learns to discriminate between generated and target samples and predicts how likely it is that a is sampled from the target distribution. In conventional GANs, D is usually optimized to approximate the probability of seeing a target sample, i.e., p_y(a)/(p_y(a) + p_x(a)), or some similar function. The generator G(z) is a continuous function which maps some m-dimensional random variable z into the space of target samples. z is typically sampled from a multivariate Gaussian or uniform distribution.
In order to improve the generator, a GAN uses the gradient ∇_a D(a) of the discriminator with respect to the discriminator input a for learning. The objective of the generator is the scalar function D(G(z)), therefore the gradient of the objective function is just a scaled version of ∇_a D(a) evaluated at a = G(z), which then propagates further to the parameters of G. This gradient tells the generator in which direction D becomes larger, i.e., in which direction the ratio of target examples increases. The generator changes slightly so that z is now mapped to a new a = G(z), moving the sample generated by z a little bit towards the direction where D was larger, i.e., where target examples were more likely. However, D(a) and its derivative only take into account the local neighborhood of a, since regions of the sample space that are distant from a do not have much influence on D(a). Regions of data space that have strong support in p_y but not in p_x will not be noticed by the generator via discriminator gradients. The restriction to local environments hampers GAN learning significantly (Arjovsky & Bottou, 2017; Arjovsky et al., 2017).
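As a minimal numeric illustration (not from the paper; the one-dimensional G, D, and all constants below are made up for exposition), the generator's gradient is the discriminator's input gradient scaled through the generator, by the chain rule:

```python
import numpy as np

# Hypothetical 1-d example: G(theta) = theta * z, D(a) = -(a - 3)^2
theta = 1.0
z = 2.0

G = lambda th: th * z
D = lambda a: -(a - 3.0) ** 2
dD_da = lambda a: -2.0 * (a - 3.0)          # gradient of D wrt its input

# Chain rule: d D(G(theta)) / d theta = D'(G(theta)) * dG/dtheta, with dG/dtheta = z
analytic = dD_da(G(theta)) * z

h = 1e-6                                     # finite-difference check
numeric = (D(G(theta + h)) - D(G(theta - h))) / (2 * h)
print(abs(analytic - numeric) < 1e-4)        # True
```

Note that D only enters through its value and slope at the single point a = G(z), which is exactly the locality limitation discussed above.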
The theoretical analysis of GAN learning can be done at three different levels: (1) in the space of distributions p_x and p_y, regardless of the fact that p_x is realized by G and the distribution of z, (2) in the space of the functions G and D, regardless of the fact that G and D are typically realized by a parametric form, i.e., as neural networks, or (3) in the space of the parameters of G and D. Goodfellow et al. (2014) use (1) to prove convergence of GAN learning in their Proposition 2 in a hypothetical scenario where the learning algorithm operates by making small, local moves in p_x space. In order to see that levels (1) and (2) should both be understood as hypothetical scenarios, remember that in all practical implementations, p_x can only be altered implicitly by making small changes to the generator function G, which in turn can only be changed implicitly by small steps in its parameters. Even if we assume that the mapping from a distribution p_x to the generator G that induced it exists and is unique, this mapping from p_x to the space of G is not continuous. To see this, consider changing a distribution p_x to a new distribution p'_x by moving a small amount ε of its density to an isolated region in space where p_x has no support. Let's further assume this region has distance δ to any other region of support of p_x. By letting ε → 0, the distance between p_x and p'_x becomes smaller, yet the distance between the inducing generator functions G and G' (e.g. using the supremum norm on bounded functions) will not tend to zero, because for at least one function input z we have ‖G(z) − G'(z)‖ ≥ δ. Because of this, we need to go further than the distribution space when analyzing GAN learning. In practice, when learning GANs, we are restricted to small steps in parameter space, which in turn lead to small steps in function space and finally to small steps in distribution space. But not all small steps in distribution space can be realized this way, as shown in the example above. This causes local Nash equilibria in the function space, because even though in distribution space it would be easy to escape by making small steps, such a step would require very large changes in function space and is thus not realizable. In this paper we show that Coulomb GANs do not exhibit any local Nash equilibria in the space of the functions G and D.
To the best of our knowledge, this is the first formulation of GAN learning that can guarantee this property. Of course, Coulomb GANs are learned as parametrized neural networks, and as we will discuss in Subsection 2.4.2, Coulomb GANs are not immune to the usual issues that arise from parameter learning, such as over- and underfitting, which can cause local Nash equilibria due to a bad choice of parameters.

2.2 From Conventional GANs to Potentials
If the density p_x or p_y approaches a Dirac delta-distribution, gradients vanish since the density approaches zero except at the exact location of data points. Similarly, electric point charges are often represented by Dirac delta-distributions; however, the electric potential created by a point charge has influence everywhere in the space, not just locally. The electric potential (Coulomb potential) created by a point charge Q is φ(r) = Q / (4πε₀ r), where r is the distance to the location of Q and ε₀ is the dielectric constant. Motivated by this electric potential, we introduce a similar concept for GAN learning: instead of the difference of densities ρ(a), we rather consider a potential function Φ(a) defined as

Φ(a) = ∫ ρ(b) k(a, b) db    (1)

with some kernel k(a, b) which defines the influence of a point at b onto a point at a. The crucial advantage of potentials is that each point can influence each other point in space if k is chosen properly. If we minimize this potential we are at the same time minimizing the difference of densities ρ: for all kernels it holds that if ρ(a) = 0 for all a, then Φ(a) = 0 for all a. We must still show that (i) if Φ(a) = 0 for all a then ρ(a) = 0 for all a, and, even more importantly, (ii) whether a gradient optimization of Φ leads to Φ(a) = 0 for all a. This is not the case for every kernel. Indeed, only for particular kernels does gradient optimization of Φ lead to Φ(a) = 0 for all a, that is, ρ(a) = 0 for all a (Hochreiter & Obermayer, 2005) (see also Theorem 1 below). An example of such a kernel is the one leading to the Coulomb potential from above, where k(a, b) ∝ 1/‖a − b‖ in three-dimensional space. As we will see in the following, the ability to have samples that influence each other over long distances, like charges in a Coulomb potential, will lead to GANs with a single, optimal Nash equilibrium.
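As an illustration (not from the paper), the potential of Eq. (1) can be approximated from finite sample sets by replacing the densities with empirical means; the kernel parameters and sample sets below are hypothetical choices:

```python
import numpy as np

def plummer_kernel(a, b, d=3, eps=1.0):
    """Low-dimensional Plummer kernel k(a, b) = 1 / (||a - b||^2 + eps^2)^(d/2)."""
    sq_dist = np.sum((a - b) ** 2, axis=-1)
    return 1.0 / (sq_dist + eps ** 2) ** (d / 2)

def potential(a, real_samples, gen_samples, d=3, eps=1.0):
    """Empirical potential Phi(a): target-density term minus model-density term."""
    attract = np.mean(plummer_kernel(a, real_samples, d, eps))
    repel = np.mean(plummer_kernel(a, gen_samples, d, eps))
    return attract - repel

rng = np.random.default_rng(0)
y = rng.normal(size=(256, 2))          # "real" samples
x_same = y.copy()                      # generator matches the target exactly
x_off = y + 5.0                        # generator mass in the wrong place

a = np.zeros(2)
print(potential(a, y, x_same))         # 0.0: the densities cancel
print(potential(a, y, x_off) > 0)      # True: real mass dominates near a
```

The long-range tail of the kernel is what lets the misplaced generator mass at x_off still feel the pull of the target region, unlike a purely local discriminator signal.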
2.3 GANs as Electrical Fields
For Coulomb GANs, the generator objective is derived from electrical field dynamics: real and generated samples generate a potential field, where samples of the same class (real vs. generated) repel each other, but attract samples of the opposite class. However, real data points are fixed in space, so the only samples that can move are the generated ones. In turn, the gradient of the potential with respect to the input samples creates a vector field in the space of samples. The generator can move its samples along the forces generated by this field. Such a field is depicted in Figure 1. The discriminator learns to predict the potential function, in order to approximate the current potential landscape of all samples, not just the ones in the current minibatch. Meanwhile, the generator learns to distribute its samples across the whole field in such a way that the energy is minimized, thus naturally avoiding mode collapse and covering the whole region of support of the data. The energy is minimal and equal to zero only if all potential differences are zero and the model distribution is equal to the target distribution.
Within an electrostatic field, the strength of the force on one particle depends on its distance to other particles and their charges. If left to move freely, the particles will organize themselves into a constellation where all forces equal out and no potential differences are present. For continuous charge distributions, the potential field is constant without potential differences once charges no longer move, since forces are equaled out. If the potential field is constant, then the difference of densities is constant, too; otherwise the potential field would have local bumps. The same behavior is modeled within our Coulomb GAN, except that real and generated samples replace the positive and negative particles, respectively, and that the real data points remain fixed. Only the generated samples are allowed to move freely, in order to minimize the energy. The generated samples are attracted by real samples, so they move towards them. At the same time, generated samples repel each other, so they do not clump together, which would lead to mode collapsing.
Analogously to electrostatics, the potential Φ(a) from Eq. (1) gives rise to a field E(a) = −∇_a Φ(a) and to an energy function F = (1/2) ∫ ρ(a) Φ(a) da. The field applies a force on charges at a which pushes the charges toward lower energy constellations. Ultimately, the Coulomb GAN aims to make the potential Φ zero everywhere via the field E(a), which is the negative gradient of Φ. For proper kernels k, it can be shown (i) that Φ can be pushed to zero via its negative gradient given by the field E, and (ii) that Φ(a) = 0 for all a implies ρ(a) = 0 for all a, and therefore p_x = p_y (Hochreiter & Obermayer, 2005) (see also Theorem 1 below).
2.3.1 Learning Process
During learning we do not change ρ or Φ directly. Instead, the location a = G(z) to which the random variable z is mapped changes to a new location a'. For the GAN optimization dynamics, we assume that generator samples can move freely, which is ensured by a sufficiently complex generator. Importantly, generator samples originating from random variables z neither disappear nor are newly created, but are conserved. This conservation is expressed by the continuity equation (Schwartz, 1972) that describes how the difference between distributions changes as the particles move along the field, i.e., how moving samples during the learning process changes our densities:
∂ρ(a)/∂t + ∇ · ( ρ(a) v(a) ) = 0    (2)

for the sample density difference ρ and unit charges that move with "velocity" v(a) = sign(ρ(a)) E(a). The continuity equation is crucial as it establishes the connection between moving samples and changing the generator density p_x, and thereby ρ. The sign function of the velocity indicates whether positive or negative charges are present at a. The divergence operator "∇·" determines whether samples move toward or away from a for a given field. Basically, the continuity equation says that if the generator density increases, then generator samples must flow into the region, and if the generator density decreases, they flow outwards. We assume that differently charged particles cancel each other. If generator samples are moved away from a location a, then ρ(a) is increasing, while ρ(a) is decreasing when generator samples are moved toward a. The continuity equation is also obtained as a first-order ODE for moving particles in a potential field (Dembo & Zeitouni, 1988), and therefore describes the dynamics of how the densities change. We obtain

∂ρ(a)/∂t = −∇ · ( ρ(a) sign(ρ(a)) E(a) ) = −∇ · ( |ρ(a)| E(a) )    (3)

The density difference ρ(a) indicates how many samples are locally available for being moved. At each local minimum and local maximum of ρ we obtain ∇ρ(a) = 0. Using the product rule for the divergence operator, at points a that are minima or maxima of ρ, Eq. (3) reduces to

∂ρ(a)/∂t = −|ρ(a)| ∇ · E(a)    (4)

In order to ensure that ρ converges to zero, it is necessary and sufficient that sign(ρ(a)) ∇ · E(a) > 0 for all a with ∇ρ(a) = 0 and ρ(a) ≠ 0, as this condition ensures the uniform decrease of the maximal absolute density difference max_a |ρ(a)|.
2.3.2 Choice of Kernel
As discussed before, the choice of kernel k is crucial for Coulomb GANs. The m-dimensional Coulomb kernel and the m-dimensional Plummer kernel lead to (i) a potential Φ that is pushed to zero via the field it creates, and (ii) the property that Φ(a) = 0 for all a implies ρ(a) = 0 for all a, and therefore p_x = p_y (Hochreiter & Obermayer, 2005). Thus, gradient learning with these kernels has been proved to converge to an optimal solution. However, both the m-dimensional Coulomb and the m-dimensional Plummer kernel lead to numerical instabilities if m is large. Therefore the Coulomb potential for the Coulomb GAN is constructed from a low-dimensional Plummer kernel with parameters d and ε:

k(a, b; d, ε) = 1 / ( ‖a − b‖² + ε² )^{d/2}    (5)
The original Plummer kernel is obtained with d = m − 2. The resulting field and potential energy are

E(a) = −∇_a Φ(a) = −∫ ρ(b) ∇_a k(a, b; d, ε) db    (6)

F = (1/2) ∫ ρ(a) Φ(a) da    (7)
The next theorem states that for freely moving generated samples, ρ(a) converges to zero, that is, p_x = p_y, when using this potential function Φ(a).
Theorem 1 (Convergence with lowdimensional Plummer kernel).
For a ∈ ℝ^m, 1 ≤ d ≤ m − 2, and ε > 0, the densities p_x and p_y equalize over time when minimizing the energy F with the low-dimensional Plummer kernel by gradient descent. The convergence is faster for larger d.
Proof.
See Section A.2. ∎
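As an illustration (not from the paper), the energy F can be estimated from two finite sample sets in an MMD-like form, using pairwise kernel sums; all constants below are hypothetical choices:

```python
import numpy as np

def plummer_kernel(A, B, d=3, eps=1.0):
    """Pairwise kernel matrix K[i, j] = 1 / (||A_i - B_j||^2 + eps^2)^(d/2)."""
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return 1.0 / (sq + eps ** 2) ** (d / 2)

def energy(x, y, d=3, eps=1.0):
    """Empirical energy: (1/2) * (mean k(y, y') - 2 mean k(x, y) + mean k(x, x'))."""
    return 0.5 * (plummer_kernel(y, y, d, eps).mean()
                  - 2.0 * plummer_kernel(x, y, d, eps).mean()
                  + plummer_kernel(x, x, d, eps).mean())

rng = np.random.default_rng(0)
y = rng.normal(size=(128, 2))                          # target samples
x_close = y + rng.normal(scale=0.1, size=y.shape)      # generator nearly matches
x_far = y + 10.0                                       # generator far off

print(energy(x_close, y) < energy(x_far, y))           # True: matching lowers the energy
```

This is the sample-based analogue of Eq. (7) and makes the connection to the MMD criterion (cf. Eq. (33)) concrete: the energy shrinks as the two sample clouds align.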
2.4 Definition of the Coulomb GAN
The Coulomb GAN minimizes the electric potential energy F from Eq. (7) using a stochastic gradient descent based approach with minibatches. Appendix Section A.4 contains the equations for the Coulomb potential, field, and energy in this case. Generator samples x_i = G(z_i) are obtained by drawing random numbers z_i and transforming them into outputs. Each minibatch also includes real world samples y_i. This gives rise to a minibatch-specific potential Φ̂, obtained by replacing the expectations in Eq. (1), using the kernel from Eq. (5), with empirical means over the drawn samples:

Φ̂(a) = (1/N_y) Σ_{i=1}^{N_y} k(a, y_i; d, ε) − (1/N_x) Σ_{i=1}^{N_x} k(a, x_i; d, ε)    (8)
It is tempting to have a generator network that directly minimizes this potential between generated and training set points. In fact, we show that Φ̂(a) is an unbiased estimate of Φ(a) in Appendix Section A.4. However, the estimate has very high variance: for example, if a minibatch fails to sample training data from an existing mode, the field would drive all generated samples that have been generated at this mode to move elsewhere. The high variance has to be counteracted by extremely low learning rates, which makes learning infeasible in practice, as confirmed by initial experiments. Our solution to this problem is to have a network that generalizes over the minibatch-specific potentials: each minibatch contains different generator samples x_i for 1 ≤ i ≤ N_x and real world samples y_i for 1 ≤ i ≤ N_y, and they create a batch-specific potential Φ̂. The goal of the discriminator D is to learn the potential Φ averaged over many minibatches. Thus the discriminator function D fulfills a similar role as other typical GAN discriminator functions, i.e., it discriminates between real and generated data such that for any point a in space, D(a) should be greater than zero if ρ(a) > 0 and smaller than zero otherwise. In particular, D also indicates, via its gradient and its potential properties, directions toward regions where training set samples are predominant and regions where generator samples are predominant. The generator in turn tries to move all of its samples, according to the vector field induced by D, into areas where generator samples are missing and training set samples are predominant. The generator minimizes the approximated energy as predicted by the discriminator. The losses for the discriminator and for the generator are given by:
L_D = (1/2) E_{a∼p(a)} [ (D(a) − Φ̂(a))² ]    (9)

L_G = −E_{z∼p_z(z)} [ D(G(z)) ]    (10)

where p(a) is a distribution in which each point of support of both the generator and the real world distribution is surrounded with a Gaussian ball, similar to Bishop et al. (1998), in order to overcome the problem that the generator distribution is only a submanifold of the data space. These loss functions cause approximated potential values that are negative to be pushed toward zero. Finally, the Coulomb GAN, like all other GANs, consists of two parts: a generator to generate model samples, and a discriminator that provides its learning signal. Without a discriminator, our approach would be very similar to GMMNs (Li et al., 2015), as can be seen in Eq. (33), but with an optimal kernel specifically tailored to the problem of estimating differences between probability distributions.
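The two losses can be sketched as follows (an illustrative sketch, not the paper's implementation: D is stood in for by an arbitrary callable, the evaluation points and all constants are assumptions, and the Gaussian-ball sampling is omitted):

```python
import numpy as np

def batch_potential(a, x_batch, y_batch, d=3, eps=1.0):
    """Minibatch potential Phi_hat(a): kernel mean over real minus over generated samples."""
    k = lambda p, Q: 1.0 / (np.sum((p - Q) ** 2, axis=-1) + eps ** 2) ** (d / 2)
    return np.mean(k(a, y_batch)) - np.mean(k(a, x_batch))

def discriminator_loss(D, points, x_batch, y_batch):
    """L_D: squared error between the prediction D(a) and Phi_hat(a) at each point a."""
    targets = np.array([batch_potential(a, x_batch, y_batch) for a in points])
    preds = np.array([D(a) for a in points])
    return 0.5 * np.mean((preds - targets) ** 2)

def generator_loss(D, gen_batch):
    """L_G: generated samples should move toward higher potential, so maximize D there."""
    return -np.mean([D(a) for a in gen_batch])

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 2))               # generated minibatch
y = rng.normal(size=(16, 2)) + 2.0         # real minibatch, shifted away
perfect_D = lambda a: batch_potential(a, x, y)
print(discriminator_loss(perfect_D, np.vstack([x, y]), x, y))   # 0.0 for a perfect D
```

A discriminator that exactly reproduces the batch potential incurs zero loss; the generator loss is then positive here because the generated points sit at negative potential and are pushed toward the real mass.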
We use each minibatch only for one update of the discriminator and the generator. It is important to note that the discriminator uses each sample in the minibatch twice: once as a point that generates the minibatch-specific potential Φ̂, and once as a point a in space for the evaluation of the potential and its approximation D(a). Using each sample twice is done for performance reasons, but is not strictly necessary: the discriminator could learn the potential field by sampling points that lie between generator and real samples as in Gulrajani et al. (2017), but we are mainly interested in correct predictions in the vicinity of generator samples. Pseudocode for the learning algorithm is detailed in Algorithm 1 in the appendix.
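To make the dynamics concrete, here is a deliberately simplified simulation (not the paper's algorithm: there are no networks, and the mode locations, kernel parameters, and step size are made up). Generated points are moved directly along the gradient of the minibatch potential, which is what the generator's updates approximate through D:

```python
import numpy as np

def field_all(X, Q, d=2, eps=1.0):
    """For each row of X, the mean over rows q of Q of grad_x k(x, q), Plummer kernel."""
    diff = X[:, None, :] - Q[None, :, :]                              # (n, m, 2)
    w = -d / (np.sum(diff ** 2, axis=-1) + eps ** 2) ** (d / 2 + 1)   # kernel-gradient weights
    return np.mean(w[..., None] * diff, axis=1)                       # (n, 2)

rng = np.random.default_rng(0)
# Two real modes; the generator starts with all of its mass on one of them.
y = np.concatenate([rng.normal(-2, 0.2, size=(64, 2)), rng.normal(2, 0.2, size=(64, 2))])
x = rng.normal(-2, 0.2, size=(128, 2))

for _ in range(1500):
    # Ascend Phi_hat: attraction toward real samples, repulsion among generated ones.
    x = x + 0.3 * (field_all(x, y) - field_all(x, x))

print((x[:, 0] > 0).mean())   # fraction of generated samples now on the second mode's side
```

Because the overpopulated mode repels its own samples while the neglected mode attracts them over long range, mass leaks across the gap, which is exactly the mechanism that counteracts mode collapse.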
2.4.1 Optimality of the Solution
Convergence of the GAN learning process was proved for a two time-scale update rule by Heusel et al. (2017). A local Nash equilibrium is a pair (D*, G*) of discriminator and generator that fulfills the two conditions L_D(D*; G*) ≤ L_D(D; G*) and L_G(G*; D*) ≤ L_G(G; D*) for all D and G in some neighborhoods U(D*) and U(G*). We show in the following Theorem 2 that for Coulomb GANs every local Nash equilibrium is necessarily identical to the unique global Nash equilibrium. In other words, any equilibrium point of the Coulomb GAN that is found to be locally optimal has to be the one global Nash equilibrium, as the minimization of the energy in Eq. (33) leads to a single, global optimum at p_x = p_y.
Theorem 2 (Optimal Solution).
If the pair (D*, G*) is a local Nash equilibrium for the Coulomb GAN objectives, then it is the global Nash equilibrium, no other local Nash equilibria exist, and G* has output distribution p_x = p_y.
Proof.
See Appendix Section A.3. ∎
2.4.2 Coulomb GANs in Practice
To implement GANs in practice, we need learnable models for G and D. We assume that our models for G and D are continuously differentiable with respect to their parameters and inputs. Toward this end, GANs are typically implemented as neural networks optimized by (some variant of) gradient descent. Thus we may not find the optimal G or D, since neural networks may suffer from capacity or optimization issues. Recent research indicates that the effect of local minima in deep learning vanishes with increasing depth (Dauphin et al., 2014; Choromanska et al., 2015; Kawaguchi, 2016), such that this limitation becomes less restrictive as our ability to train deep networks grows thanks to hardware and optimization improvements.

The main problem with learning Coulomb GANs is to approximate the potential function Φ, which is a complex function in a high-dimensional space, since the potential can be very non-linear and non-smooth. When learning the discriminator, we must ensure that enough data is sampled and averaged over. We already lessened the non-linearity problem by using a low-dimensional Plummer kernel. But still, this kernel can introduce large non-linearities if samples are close to each other. It is crucial that the discriminator learns slowly enough to accurately estimate the potential function which is induced by the current generator. The generator, in turn, must learn even more slowly, since it must be tracked by the discriminator. We expect these approximation problems to be tackled by the research community in the near future, which would enable optimal GAN learning.
The formulation of GAN learning as a potential field naturally solves the mode collapsing issue: the example described in Section A.1, where a normal GAN cannot escape a local Nash equilibrium, is not a converged solution for the Coulomb GAN. If all probability mass of the generator lies in one of the modes, then both the attracting forces from real-world samples located at the other mode and the repelling forces from the overrepresented generator mode will act upon the generator until it generates samples at the other mode as well.
3 Experiments
In all of our experiments, we used a Plummer kernel of low dimensionality d. This kernel both gave the best computational performance and has low risk of running into numerical issues. We used a batch size of 128. To evaluate the quality of a GAN, the FID metric as proposed by Heusel et al. (2017) was calculated using 50k samples drawn from the generator, while the training set statistics were calculated using the whole training set. We compare to BEGAN (Berthelot et al., 2017), DCGAN (Radford et al., 2016), and WGAN-GP (Gulrajani et al., 2017), both in their original versions and when using the two time-scale update rule (TTUR), using the settings from Heusel et al. (2017). We additionally compare to MMD-GAN (Li et al., 2017), which is conceptually very similar to the Coulomb GAN but uses a Gaussian kernel instead of the Plummer kernel. We use the dataset-specific settings recommended in Li et al. (2017) and report the best FID score over the course of training. All images shown in this paper were produced with a random seed and not cherry-picked. The implementation used for these experiments is available online (link to be filled in after review). Appendix Section A.5 contains an additional toy example demonstrating that Coulomb GANs do not suffer from mode collapse when fitting a simple Gaussian mixture of 25 components.
3.1 Image Datasets
To demonstrate the ability of the Coulomb GAN to learn distributions in high-dimensional spaces, we trained a Coulomb GAN on several popular image data sets: the cropped and centered images of celebrities from the Large-scale CelebFaces Attributes ("CelebA") data set (Liu et al., 2015); the LSUN bedrooms data set, which consists of over 3 million 64x64 pixel images of the bedrooms category of the large scale image database LSUN (Yu et al., 2015); as well as the CIFAR-10 data set. For these experiments, we used the DCGAN architecture (Radford et al., 2016) with a few modifications: our convolutional kernels all have a kernel size of 5x5, and the random noise vector that serves as input to the generator has fewer dimensions: 32 for CelebA and LSUN bedrooms, and 16 for CIFAR-10. Furthermore, the discriminator uses twice as many feature channels in each layer as in the DCGAN architecture. For the Plummer kernel, ε was set to 1. We used the Adam optimizer with separate learning rates for the generator and the discriminator. To improve convergence performance, we used the scaled tanh output activation function (LeCun et al., 1998). For regularization we used an L2 weight decay term with a small weighting factor. Learning was stopped by monitoring the FID metric (Heusel et al., 2017): once learning plateaus, we scaled the learning rate down by a factor of 10 and let learning continue until the FID plateaus again. The results are reported in Table 3(b), and generated images can be seen in Figure 2 and in the Appendix in Section A.7. Coulomb GANs tend to outperform standard GAN approaches like BEGAN and DCGAN, but are outperformed by the Improved Wasserstein GAN. However, it is important to note that the Improved Wasserstein GAN used a more advanced network architecture based on ResNet blocks (Gulrajani et al., 2017), which we could not replicate due to runtime constraints. Overall, the low FID of Coulomb GANs stems from the fact that the images show a wide variety of different samples; e.g., on CelebA, Coulomb GANs exhibit a very wide variety of faces, backgrounds, eye colors, and orientations.

To further investigate how much variation the samples generated by the Coulomb GAN contain, we followed the advice of Arora & Zhang (2017) to estimate the support size of the generator's distribution by checking how large a sample from the generator must be before we start generating duplicates. We were able to generate duplicates with a probability of around 50 % when using samples of size 1024, which indicates that the support size learned by the Coulomb GAN would be around 1M. This is a strong indication that the Coulomb GAN was able to spread out its samples over the whole target distribution. A depiction is included in Figure 3, which also shows the nearest neighbor in the training set of the generated images, confirming that the Coulomb GAN does not just memorize training images.
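The support-size argument follows the birthday paradox: if a sample of size s contains a duplicate with probability p, a uniform support of size N satisfies p ≈ 1 − exp(−s(s−1)/2N). Solving for N (a back-of-the-envelope sketch; the uniformity assumption is an idealization) reproduces the stated order of magnitude:

```python
import math

def support_from_duplicates(s, p):
    """Estimate support size N from duplicate probability p in a sample of size s."""
    return s * (s - 1) / (2.0 * math.log(1.0 / (1.0 - p)))

# Duplicates appeared with probability ~0.5 at sample size 1024:
print(round(support_from_duplicates(1024, 0.5)))   # ~7.6e5, i.e., on the order of 1M
```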
3.2 Language Modeling
We repeated the experiments from Gulrajani et al. (2017), where Improved Wasserstein GANs (WGAN-GP) were trained to produce text samples after being trained on the Google Billion Word data set (Chelba et al., 2013), using the same network architecture as in the original publication. We use the Jensen-Shannon divergence on 4-grams and 6-grams as an evaluation criterion. The results are summarized in Table 3(a).


4 Conclusion
The Coulomb GAN is a generative adversarial network with strong theoretical guarantees. Our theoretical results show that the Coulomb GAN will be able to approximate the real distribution perfectly if the networks have sufficient capacity and training does not get stuck in local minima. Our results show that the potential field used by the Coulomb GAN far outperforms MMD based approaches due to its lowdimensional Plummer kernel, which is better suited for modeling probability density functions, and is very effective at eliminating the mode collapse problem in GANs. This is because our loss function forces the generated samples to occupy different regions of the learned distribution. In practice, we have found that Coulomb GANs are able to produce a wide range of different samples. However, in our experience, this sometimes leads to a small number of generated samples that are nonsensical interpolations of existing data modes. While these are sometimes also present in other GAN models
(Radford et al., 2016), we found that our model produces such images at a slightly higher rate. This issue might be solved by finding better ways of learning the discriminator, as learning the correct potential field is crucial for the Coulomb GAN's performance. We also observed that increasing the capacity of the discriminator seems to always increase generative performance. We thus hypothesize that the largest issue in learning Coulomb GANs is that the discriminator needs to approximate the potential field very well in a high-dimensional space. In summary, instead of directly optimizing a criterion based on local differences of densities, which can exhibit many local minima, Coulomb GANs are based on a potential field that has no local minima. The potential field is created by point charges in an analogy to the electric field in physics. We have proved that if learning converges, then it converges to the optimal solution, provided the samples can be moved freely. We showed that Coulomb GANs avoid mode collapsing, model the target distribution more truthfully than standard GANs, and do not overlook high-probability regions of the target distribution.

References
 Arjovsky & Bottou (2017) M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. International Conference on Learning Representations (ICLR), 2017.

Arjovsky et al. (2017) M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.
 Arora & Zhang (2017) S. Arora and Y. Zhang. Do GANs actually learn the distribution? An empirical study. ArXiv e-prints, 2017.
 Berthelot et al. (2017) D. Berthelot, T. Schumm, and L. Metz. BEGAN: boundary equilibrium generative adversarial networks. ArXiv e-prints, abs/1703.10717, 2017.
 Bishop et al. (1998) C. Bishop, M. Svensén, and C. Williams. GTM: the generative topographic mapping. Neural computation, 10(1):215–234, 1998.
 Che et al. (2017) T. Che, Y. Li, A. P. Jacob, Y. Bengio, and W. Li. Mode regularized generative adversarial networks. International Conference on Learning Representations (ICLR), 2017.
 Chelba et al. (2013) C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion word benchmark for measuring progress in statistical language modeling. ArXiv e-prints, 2013.
 Chintala et al. (2016) S. Chintala, E. Denton, M. Arjovsky, and M. Mathieu. How to train a GAN? Tips and tricks to make GANs work. https://github.com/soumith/ganhacks, 2016.
 Choromanska et al. (2015) A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The loss surfaces of multilayer networks. Journal of Machine Learning Research, 38:192–204, 2015.
 Clevert et al. (2016) D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). International Conference on Learning Representations (ICLR), 2016.
 Dauphin et al. (2014) Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems 27, pp. 2933–2941, 2014.
 Dembo & Zeitouni (1988) A. Dembo and O. Zeitouni. General potential surfaces and neural networks. Phys. Rev. A, 37:2134–2143, 1988. doi: 10.1103/PhysRevA.37.2134.

Dziugaite et al. (2015) G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence (UAI'15), pp. 258–267, 2015.
 Efthimiou & Frye (2014) C. J. Efthimiou and C. Frye. Spherical Harmonics in p Dimensions. World Scientific, 2014.
 Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672–2680, 2014.
 Goodfellow (2014) I. J. Goodfellow. On distinguishability criteria for estimating generative models. ArXiv e-prints, 2014.
 Goodfellow (2017) I. J. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. ArXiv e-prints, 2017.
 Gretton et al. (2012) A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. J. Mach. Learn. Res., 13:723–773, 2012.
 Gulrajani et al. (2017) I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of Wasserstein GANs. ArXiv e-prints, 2017.
 Gutmann & Hyvärinen (2012) M. U. Gutmann and A. Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. J. Mach. Learn. Res., 13(1):307–361, 2012.
 Heusel et al. (2017) M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, G. Klambauer, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a Nash equilibrium. ArXiv e-prints, 2017.

Hochreiter & Mozer (2001) S. Hochreiter and M. C. Mozer. Coulomb classifiers: Reinterpreting SVMs as electrostatic systems. Technical Report CU-CS-921-01, Department of Computer Science, University of Colorado, Boulder, 2001.
 Hochreiter & Obermayer (2005) S. Hochreiter and K. Obermayer. Optimal kernels for unsupervised learning. In Proceedings of the IEEE International Joint Conference on Neural Networks, volume 3, pp. 1895–1899, 2005.
 Hochreiter et al. (2003) S. Hochreiter, M. C. Mozer, and K. Obermayer. Coulomb classifiers: Generalizing support vector machines via an analogy to electrostatic systems. In Advances in Neural Information Processing Systems 15, pp. 545–552. 2003.
 Isola et al. (2017) P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. ArXiv e-prints, 2017.
 Kawaguchi (2016) K. Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems 29, pp. 586–594, 2016.
 Klambauer et al. (2017) G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter. Self-normalizing neural networks. ArXiv e-prints, 1706.02515, 2017.
 LeCun et al. (1998) Y. LeCun, L. Bottou, G. Orr, and K.-R. Müller. Efficient BackProp. In Neural Networks: Tricks of the Trade, pp. 9–50, London, UK, 1998. Springer-Verlag.

Ledig et al. (2016) C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. ArXiv e-prints, 2016.
 Li et al. (2017) C.-L. Li, W.-C. Chang, Y. Cheng, Y. Yang, and B. Póczos. MMD GAN: Towards deeper understanding of moment matching network. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 2200–2210, 2017.
 Li et al. (2015) Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pp. 1718–1727, 2015.
 Lim & Ye (2017) J. H. Lim and J. C. Ye. Geometric GAN. ArXiv e-prints, 2017.

Liu et al. (2015) Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), 2015.
 Metz et al. (2016) L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. ArXiv e-prints, 2016.
 Mroueh et al. (2017) Y. Mroueh, T. Sercu, and V. Goel. McGan: Mean and covariance feature matching GAN. ArXiv e-prints, 2017.
 Perrone & Cooper (1995) M. P. Perrone and L. N. Cooper. Coulomb potential learning. In M. A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks, pp. 272–275, Cambridge, MA, 1995. The MIT Press.
 Radford et al. (2016) A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. International Conference on Learning Representations (ICLR), 2016.
 Robbins & Monro (1951) H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Statist., 22(3):400–407, 1951. doi: 10.1214/aoms/1177729586.
 Salimans et al. (2016) T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. ArXiv e-prints, 2016.
 Schwartz (1972) M. Schwartz. Principles of Electrodynamics. McGrawHill, 1972.
 Yu et al. (2015) F. Yu, Y. Zhang, S. Song, A. Seff, and J. Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. ArXiv e-prints, 2015.
 Zhao et al. (2017) J. J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. International Conference on Learning Representations (ICLR), 2017.
Appendix A
A.1 Example of Convergence to Mode Collapse in Conventional GANs
As an example of how a GAN can converge to a Nash equilibrium that exhibits mode collapse, consider a target distribution whose support consists of two distinct, non-overlapping regions A and B that are distant from each other, i.e., the target probability is zero outside of A and B. Further assume that 50 % of the probability mass is in A and 50 % in B, and that the generator has mode-collapsed onto B, which contains 100 % of the generator's probability mass. In this situation, the optimal discriminator classifies all points from A as "real" (pertaining to the target distribution) by supplying an output of 1 for them (1 is the target for real samples and 0 the target for generated samples). Within B, the other region, the discriminator sees twice as many generated data points as real ones, since 100 % of the probability mass of the generator's distribution is in B, but only 50 % of the probability mass of the real data distribution. So one third of the points seen by the discriminator in B are real, and the other two thirds are generated. Thus, to minimize its prediction error for a proper objective (squared error or cross entropy), the discriminator has to output 1/3 for every point from B. This optimal output is even independent of the exact form of the real distribution within B. The generator will match the shape of the target distribution locally: if the shape were not matched, local gradients of the discriminator with respect to its input would be present and the generator would improve locally. Once local improvements of the generator are no longer possible, the shape of the target distribution is matched and the discriminator output is locally constant. In this situation, the expected gradient of the discriminator is the zero vector, because it has reached an optimum. Since the discriminator output is constant in B (and A), the generator's expected gradient is the zero vector, too.
The situation is stable even though random fluctuations from the ongoing stochastic gradient descent (SGD) learning remain: whenever the generator produces data outside of (but close to) the collapsed region, the discriminator easily detects this and pushes the generator's samples back. Inside the region, small deviations of the generator from the shape of the real distribution are detected by the discriminator as well, by deviating slightly from its optimal output; subsequently, the generator is pushed back to the original shape. If the discriminator deviates from its optimum, it is likewise forced back to its optimum. Overall, GAN learning has reached a local Nash equilibrium and has converged in the sense that the parameters fluctuate around the attractor point (the fluctuations depend on learning rate, sample size, etc.). To achieve true mathematical convergence, Heusel et al. (2017) assume decaying learning rates to anneal the random fluctuations, similar to Robbins & Monro's (1951) original convergence proof for SGD.
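The optimal constant discriminator output in the mixed region of this example can be checked numerically; a minimal sketch, assuming the conventional target values 1 for real and 0 for generated samples:

```python
import numpy as np

# In the mode-collapsed region, one third of the points the discriminator
# sees are real (target 1) and two thirds are generated (target 0).
# The best constant output minimizes the expected squared error.
outputs = np.linspace(0.0, 1.0, 1001)
loss = (1 / 3) * (1.0 - outputs) ** 2 + (2 / 3) * (0.0 - outputs) ** 2
best = outputs[np.argmin(loss)]  # the fraction of real points in the region
```

The minimizer coincides with the fraction of real points, independent of how the real density is shaped inside the region.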
A.2 Proof of Theorem 1
We first recall Theorem 1:
Theorem (Convergence with low-dimensional Plummer kernel).
For , , and the densities and equalize over time when minimizing energy with the low-dimensional Plummer kernel by gradient descent. The convergence is faster for larger .
In a first step, we prove that for local maxima or local minima of , the expression holds for small enough. For proving this equation, we apply the Laplace operator in spherical coordinates to the low-dimensional Plummer kernel. Using the result, we see that the integral is dominated by large negative values of around . These negative values can even be decreased by decreasing . Therefore we can ensure, by choosing small enough, that at each local minimum and local maximum of . Thus, the maximal and minimal points of move toward zero.
In a second step, we show that new maxima or minima cannot appear and that the movement of toward zero stops at zero and not earlier. Since is continuously differentiable, all points in environments of maxima and minima move toward zero. Therefore the largest moves toward zero. We have to ensure that moving toward zero does not converge to a point apart from zero. We derive that the movement toward zero is lower bounded by . Thus, the movement slows down at . Solving the differential equation and applying it to the maximum of the absolute value of gives . Thus, converges to zero over time.
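The final step of this argument can be made explicit. In our own notation (the original symbols were lost from this copy), write ρ for the maximal absolute density difference and c > 0 for the constant in the bound:

```latex
% Dominating differential equation and its solution (notation assumed):
\frac{d}{dt}\,\rho_{\max}(t) \;\leq\; -\,c\,\rho_{\max}(t), \qquad c > 0
\quad\Longrightarrow\quad
\rho_{\max}(t) \;\leq\; \rho_{\max}(0)\, e^{-c t}
\;\xrightarrow{\;t \to \infty\;}\; 0 .
```

The exponential bound is why the movement slows down as the difference approaches zero yet still converges to zero rather than to some other value.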
Proof.
For , we have , where the theorem has already been proved for small enough (Hochreiter & Obermayer, 2005).
At each local minimum and local maximum of we have . Using the product rule for the divergence operator, Eq. (3) reduces to
(11) 
The term can be expressed as
(12) 
We next consider for the low-dimensional Plummer kernel. We define the spherical Laplace operator in dimensions as ; the Laplace operator in spherical coordinates is then (Proposition 2.5 in Efthimiou & Frye (2014)):
(13) 
Note that only has second order derivatives with respect to the angles of the spherical coordinates.
With we obtain for the Laplace operator applied to the low-dimensional Plummer kernel:
(14) 
and in particular
(15) 
For we have , and obtain
(16) 
and
(17) 
and
(18) 
Therefore, is negative with minimum at , increasing with , and increasing with for . For we have to restrict the sphere in the following to and ensure the increase of with .
If , then we define a sphere with radius around for which holds for each . Note that is continuously differentiable. We have
(19)  
We bound by
(20) 
Using , we now bound independently from , since is a difference of distributions. For small enough we can ensure
(21) 
Therefore we have
(22) 
Therefore we have at each local minimum and local maximum of
(23) 
Therefore the maximal and minimal points of move toward zero. Since is continuously differentiable, as is the field, the points in an environment of the maximal and minimal points also move toward zero. Points that are not in an environment of the maximal or minimal points cannot become maximal points within an infinitesimal time step.
Since the contribution of environment dominates the integral Eq. (19), for small enough there exists a positive globally for all minima and maxima as well as for all time steps for which holds:
(24) 
The factor depends on and on the initial . is proportional to . Larger lead to larger , since the maximum or minimum is up-weighted. There might exist initial conditions for which , e.g. for infinitely many maxima and minima, but they are impossible in our applications.
Therefore maximal or minimal points approach zero at least as fast as given by
(25) 
In particular this differential equation dominates the global maximum of . Solving the differential equation gives that at least
(26) 
Thus influences the worst-case rate of convergence, where larger with leads to faster worst-case convergence.
Consequently, converges to the zero function over time, that is, becomes equal to . ∎
A.3 Proof of Theorem 2
We first recall Theorem 2:
Theorem (Optimal Solution).
If the pair is a local Nash equilibrium for the Coulomb GAN objectives, then it is the global Nash equilibrium, no other local Nash equilibria exist, and has output distribution .
Proof.
being in a local Nash equilibrium means that fulfills the two conditions
(27) 
for some neighborhoods and . For Coulomb GANs this means that has learned the potential induced by perfectly, because is convex in ; thus, if is optimal within a neighborhood , it must be the global optimum. This means that is directly minimizing . The Coulomb potential energy is, according to Eq. (7),
(28) 
Only the samples from stem from the generator, where . Here is the distribution centered at zero. The part of the energy which depends on the generator is
(29)  
Theorem 1 guarantees that there are no local minima other than the global one when minimizing . has one minimum, , which implies and for all , and therefore also according to Theorem 1. Each would mean that potential differences exist, which in turn would cause forces on generator samples that allow the energy to be minimized further. Since we assumed that the generator can reach the minimum for any , it will be reached by local (stepwise) optimization of with respect to . Since the pair is optimal within its neighborhood, the generator has reached this minimum, as there is no local minimum other than the global one. Therefore has model density with . The convergence point is a global Nash equilibrium, because there is no approximation error and zero energy is a global minimum for discriminator and generator, respectively. Theorem 1 ensures that other local Nash equilibria are not possible. ∎
A.4 Coulomb Equations in the Case of Finite Samples
GANs are samplebased, that is, samples are drawn from the model for learning (Hochreiter & Obermayer, 2005; Gutmann & Hyvärinen, 2012). Typically this is done in minibatches, where each minibatch consists of two sets of samples, the target samples , and the model samples .
For such finite samples, i.e., point charges, we have to use delta distributions to obtain unbiased estimates of the model distribution and the target distribution :
(30) 
where is the Dirac distribution centered at zero. These are unbiased estimates of the underlying distribution, as can be seen by:
(31) 
In the rest of the paper, we will drop the explicit parameterization with and for all estimates to unclutter notation, and instead just use the hat sign to denote estimates. In the same fashion as for the distributions, when we use fixed samples and , we obtain the following unbiased estimates for the potential, energy and field given by Eq. (5), Eq. (6), and Eq. (7):
(32)  
(33)  
(34)  
These are again unbiased, e.g.:
(35)  
In the limit of infinite sample size, these fixed-sample expressions lead to the equivalent statements for densities. The sample-based formulation, that is, point charges in physical terms, can only have local energy minima or maxima at locations of samples (Dembo & Zeitouni, 1988). Furthermore, the field lines originate and end at samples; therefore the field guides model samples toward real-world samples , as depicted in Figure 1. The factors and in the last equations arise from the fact that gives the force which is applied to a sample with charge . A sample is positively charged with and follows , while a sample is negatively charged with and therefore follows , too. Thus, following the force induced on a sample by the field is equivalent to gradient descent of the energy with respect to the samples and .
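To make the sample-based estimates concrete, here is a minimal NumPy sketch; the Plummer-kernel form and the sign convention of the potential are assumptions based on the paper's equations, which are not reproduced in this excerpt:

```python
import numpy as np

def plummer_kernel(a, b, d, eps):
    """Low-dimensional Plummer kernel, assumed here to be
    k(a, b) = (||a - b||^2 + eps^2)^(-d/2)."""
    return (np.sum((a - b) ** 2, axis=-1) + eps ** 2) ** (-d / 2)

def potential_estimate(a, x_real, y_gen, d=3, eps=1.0):
    """Sample-based potential at point a, cf. Eq. (32): mean kernel value
    to the real samples minus mean kernel value to the generated samples
    (sign convention assumed)."""
    return (plummer_kernel(a, x_real, d, eps).mean()
            - plummer_kernel(a, y_gen, d, eps).mean())

x_real = np.zeros((4, 2))       # real point charges at the origin
y_far = np.full((4, 2), 10.0)   # generated point charges far away
origin = np.zeros(2)
```

With real and generated charges balanced at the same locations the potential vanishes; near unmatched real mass it is positive, which is what drives generated samples toward the target distribution.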
A.5 Mixture of Gaussians
We use the synthetic data set introduced by Lim & Ye (2017)
to show that Coulomb GANs avoid mode collapse and that all modes of the target distribution are captured by the generative model. This data set comprises 100K data points drawn from a Gaussian mixture model of 25 components which are spread out evenly in the range
, with each component having a variance of 1. To make results comparable with Lim & Ye (2017), the Coulomb GAN used a discriminator network with 2 hidden layers of 128 units; however, we avoided batch normalization by using the ELU activation function (Clevert et al., 2016). We used the Plummer kernel in 3 dimensions () with an epsilon of 3 () and a learning rate of 0.01, both of which were exponentially decayed during the 1M update steps of the Adam optimizer. As can be seen in Figure 4, samples from the learned Coulomb GAN approximate the target distribution very well. All components of the original distribution are present in the model distribution at approximately the correct ratio, as shown in Figure 5. Moreover, the generated samples show approximately the correct spread within each component of the real-world distribution. Coulomb GANs outperform the other compared methods, which either fail to learn the distribution completely, ignore some of the modes, or do not capture the within-mode spread of a Gaussian. The Coulomb GAN is the only GAN approach that avoids a within-cluster collapse leading to insufficient variance within a cluster.
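A data set with the same structure can be generated with a short sketch; the grid spacing below is a placeholder of our own, since the exact range used in the experiments was lost from this copy:

```python
import numpy as np

def sample_grid_mixture(n, grid=5, spacing=2.0, std=1.0, seed=0):
    """Draw n points from an equal-weight mixture of grid x grid 2-D
    Gaussians arranged on a regular grid centered at the origin.
    `spacing` and `std` are illustrative placeholder values."""
    rng = np.random.default_rng(seed)
    centers = np.array([(i * spacing, j * spacing)
                        for i in range(grid) for j in range(grid)],
                       dtype=float)
    centers -= centers.mean(axis=0)  # center the grid at the origin
    idx = rng.integers(0, len(centers), size=n)
    return centers[idx] + rng.normal(0.0, std, size=(n, 2)), idx

data, idx = sample_grid_mixture(100_000)
```

Counting how many of the 25 component indices appear among generated samples is the simplest check for the mode-dropping failure discussed above.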
A.6 Pseudocode for Coulomb GANs
The following gives the pseudocode for training Coulomb GANs. Note that when calculating the derivative of , it is important to differentiate only with respect to , and not with respect to , even if it can happen that e.g. . In frameworks that offer automatic differentiation, such as TensorFlow or Theano, this means stopping the possible gradient backpropagation through those parameters.