The core principle of GANs is to pit two models, most commonly Neural Networks (NNs), against each other in a game-theoretic way. The first NN, denoted the generator, tries to fit the data distribution of a dataset, and the second network, denoted the discriminator, learns to distinguish between generated data and real data. Both networks learn during a so-called GAN game, and the final output is a generator network that fits the real data distribution. Still, the optimization dynamics of those networks are notoriously difficult and not well understood, leading survey works to conclude that no work has yet consistently outperformed the original non-saturating GAN formulation. One key theoretical advancement is that the previously used Jensen-Shannon divergence is ill-defined in the case of limited overlap between the distributions. One common way to circumvent this problem is to use different loss functions, like the non-saturating loss or the Wasserstein distance. Minimizing the Wasserstein distance yields clear convergence guarantees, given that the generator network is powerful enough. Still, current formulations of the Wasserstein GAN (WGAN) depend heavily on the hyperparameter setting. Our aim with this work is to explain why this is the case and what can be done to train WGANs successfully.
We review the usage of the Wasserstein distance as it is utilized in generative modelling, showcase the pitfalls of various algorithms, and propose possible alternatives.
In summary, our contributions are as follows:
A review and overview of common WGAN algorithms and their respective limitations.
A practical guide on how to apply WGANs to new datasets.
An extension to the squared entropy regularization for Optimal Transport, using the Bregman distance and moving the center of the regularization.
An extension of the currently available approaches to ensure Lipschitz continuous discriminator networks.
The remainder of this paper is organized as follows. In Section 2, a recap of the Wasserstein distance in the context of GANs is given. Sections 3 and 4 describe all the algorithms in detail. Section 5 shows our experimental results. Our findings are summarized in Section 6 and conclusions are given in Section 7.
2 Preliminaries: Wasserstein Distance
The $p$-th Wasserstein distance between two probability distributions $\mu$ and $\nu$ on a metric space is defined as follows:
$$W_p(\mu, \nu) = \Big( \inf_{\pi \in \Pi(\mu, \nu)} \int c(x, y)^p \, \mathrm{d}\pi(x, y) \Big)^{1/p},$$
where $c$ defines the ground cost and $\Pi(\mu, \nu)$ denotes the set of couplings with marginals $\mu$ and $\nu$. In this work, only the $W_1$ distance is considered in a discrete setting. This simplifies the whole problem to the following linear program:
$$W_1(\mu, \nu) = \min_{P \ge 0} \; \langle P, C \rangle \quad \text{s.t.} \quad P \mathbf{1} = \mu, \; P^\top \mathbf{1} = \nu,$$
where $C_{ij} = c(x_i, y_j)$ for any (not necessarily a distance) function $c$. This optimization problem has the following dual formulation:
$$\max_{\alpha, \beta} \; \langle \alpha, \mu \rangle + \langle \beta, \nu \rangle \quad \text{s.t.} \quad \alpha_i + \beta_j \le C_{ij}.$$
Based on the optimality conditions of linear programming, an analytical solution for $\beta$ is given by $\beta_j = \min_i \, (C_{ij} - \alpha_i)$. By replacing the dual variables with functions, namely $f$ and its $c$-transform $f^c$, the following formulation is obtained:
$$W_1(\mu, \nu) = \max_f \; \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[f^c(y)], \qquad f^c(y) = \min_x \, c(x, y) - f(x).$$
In case $c$ is a distance, it has been proven that $f^c = -f$. Using that result and rearranging the constraints yields $|f(x) - f(y)| \le c(x, y)$, which is satisfied by all functions with Lipschitz constant at most $1$. This establishes the Kantorovich-Rubinstein duality, which is used in WGANs:
$$W_1(\mu, \nu) = \max_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)].$$
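The duality can be illustrated with a small numerical sketch (NumPy; the 1-D sorted-samples closed form and the witness function $f(t) = -t$ are illustrative choices, not part of any algorithm above):

```python
import numpy as np

# For empirical measures with uniform weights on the real line, the optimal
# coupling matches sorted samples, so W1 is the mean absolute sorted difference.
def w1_1d(x, y):
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

x = np.array([0.0, 1.0, 2.0])
y = x + 0.5                            # a pure rightward shift of the samples
primal = w1_1d(x, y)                   # 0.5

# Kantorovich-Rubinstein dual: any 1-Lipschitz f gives a lower bound.
f = lambda t: -t                       # 1-Lipschitz witness for a rightward shift
dual = np.mean(f(x)) - np.mean(f(y))   # also 0.5, so the bound is tight here
```

For a shift, the primal and dual values coincide, confirming that the supremum over 1-Lipschitz functions attains the distance.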
The objective in WGANs is to leverage the Wasserstein distance to train a NN to model the underlying distribution $\mu$, given an empirical distribution $\hat{\mu}$. In the GAN framework, the generated distribution $\nu_\theta$ is constructed by using a known base distribution $\zeta$, e.g. a standard Gaussian, and transforming it using a NN $g_\theta$ with parameters $\theta$ as follows: $\nu_\theta = g_{\theta\#}\zeta$. The parameters are then learned by minimizing the distance between the parametric distribution and the empirical one using the following loss function:
$$\min_\theta \; W_1(\nu_\theta, \hat{\mu}).$$
Due to changes in the generator parameters during the optimization process, the Wasserstein distance problem changes and has to be reevaluated in each iteration. Therefore, the speed of computation is essential. In the OT literature, an additional regularization term is added to improve the speed of convergence, at the price of sub-optimal results. This results in the following formulation:
$$W_\varepsilon(\mu, \nu) = \min_{P \ge 0} \; \langle P, C \rangle + \varepsilon R(P) \quad \text{s.t.} \quad P \mathbf{1} = \mu, \; P^\top \mathbf{1} = \nu.$$
We distinguish between two different methodologies of algorithms, namely sub-optimal fullbatch methods and stochastic methods. The following algorithms for solving the Wasserstein distance problem to learn generative models are incorporated in our work:
The main iterations for all algorithms are detailed in the supplementary material.
3 Fullbatch Methods
Fullbatch estimation means taking a data-batch of size $b$ from both probability densities and solving the Wasserstein distance for this subset. The idea is that the estimated Wasserstein distance is representative of the entire dataset. This is done by setting the probability of each image in the batch to $1/b$. By the optimality conditions of convex problems, the so-called transport map $P$ is recovered. $P$ is a mapping between elements in the two batches and is plugged into the following equation to learn the generative model:
$$\mathcal{L}(\theta) = \sum_{i,j} P_{ij} \, c\big(g_\theta(z_i), y_j\big).$$
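As a minimal illustration of this plan-weighted loss, assuming toy 2-D batches and a hand-made transport plan (all arrays below are invented for illustration):

```python
import numpy as np

# Given a recovered transport plan P between a generated batch g(z) and a
# data batch y, the generator loss weights the pairwise costs by the plan.
gz = np.array([[0.0, 0.0], [1.0, 1.0]])   # "generated" samples g(z_i)
y  = np.array([[0.1, 0.1], [0.9, 0.9]])   # data samples y_j
P  = np.array([[0.5, 0.0], [0.0, 0.5]])   # transport plan (diagonal here)
C  = np.linalg.norm(gz[:, None, :] - y[None, :, :], axis=-1) ** 2
loss = float(np.sum(P * C))               # plan-weighted squared-L2 cost
```

In training, the gradient of this loss with respect to $\theta$ flows only through the cost matrix $C$; the plan $P$ is treated as a constant of the inner OT solve.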
There are two main advantages of doing the fullbatch estimation. First, the convex solvers have convergence guarantees, which are easily checked in practice . Second, the convergence speed is faster than with stochastic estimates .
3.1 Unregularized Solver
As a baseline, a solver for the unregularized Wasserstein distance is proposed, namely the Primal-Dual Hybrid Gradient (PDHG) method. To apply the PDHG, the problem is transformed into a saddle point problem as follows:
$P$ and $C$ are reshaped to one-dimensional vectors $x$ and $c$, and the two marginal constraints are combined into a single linear constraint $Ax = b$. The full computation of the saddle point formulation and the steps of the algorithm are shown in the supplementary material.
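A minimal PDHG sketch for a 2x2 toy OT linear program, assuming the vectorization and constraint matrix described above (the step sizes and iteration count are illustrative choices, not tuned values from the paper):

```python
import numpy as np

# min_{x >= 0} <c, x>  s.t.  A x = b, where x = vec(P) for a 2x2 plan.
c = np.array([0.0, 1.0, 1.0, 0.0])        # cost favors the diagonal plan
A = np.array([[1, 1, 0, 0],               # row marginals of P
              [0, 0, 1, 1],
              [1, 0, 1, 0],               # column marginals of P
              [0, 1, 0, 1]], dtype=float)
b = np.array([0.5, 0.5, 0.5, 0.5])

tau = sigma = 0.4                         # satisfies tau*sigma*||A||^2 <= 1
x = np.zeros(4); xbar = x.copy(); y = np.zeros(4)
for _ in range(20000):
    y = y + sigma * (A @ xbar - b)                    # dual ascent step
    x_new = np.maximum(x - tau * (c + A.T @ y), 0.0)  # primal step + projection
    xbar = 2 * x_new - x                              # extrapolation
    x = x_new
# x should approximate the optimal plan vec([[0.5, 0], [0, 0.5]])
```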
3.2 Regularized Optimal Transport
Learning a generative model by solving the unregularized Wasserstein distance problem is computationally infeasible. The solution to this problem in the OT literature is to solve for the regularized Wasserstein distance instead. In this work, either negative entropy regularization or quadratic regularization is utilized; they are defined as:
$$R_e(P) = \sum_{i,j} P_{ij} \big( \log P_{ij} - 1 \big), \qquad R_q(P) = \frac{1}{2} \sum_{i,j} P_{ij}^2.$$
Negative entropy regularization leads to the Sinkhorn algorithm. While the Sinkhorn algorithm converges rapidly, it also tends to be numerically unstable, and only a small range of values for the regularization weight leads to satisfactory results. One approach to reduce instabilities is to adopt a proximal regularization term based on the Bregman distance $D_h(x, y) = h(x) - h(y) - \langle \nabla h(y), x - y \rangle$:
Xie et al. proposed to use a modified Sinkhorn-Knopp algorithm (Sinkhorn-Center), with the steps given in the supplementary material.
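The basic Sinkhorn-Knopp scaling iterations can be sketched as follows (a minimal NumPy version without the log-domain stabilization a production solver would need; the toy cost matrix is illustrative):

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, iters=500):
    # Sinkhorn-Knopp: alternating scaling of the Gibbs kernel K = exp(-C/eps)
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)    # match column marginals
        u = a / (K @ v)      # match row marginals
    return u[:, None] * K * v[None, :]

C = np.array([[0.0, 1.0], [1.0, 0.0]])
a = b = np.array([0.5, 0.5])
P = sinkhorn(C, a, b, eps=0.05)
# For small eps the plan approaches the unregularized optimum diag(0.5, 0.5)
```

Note that for small regularization weights the kernel entries underflow, which is exactly the instability discussed above.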
Another way to combat the numerical stability problems and blurry transport maps is to use quadratic regularization. Plugging the quadratic regularization into the regularized Wasserstein distance, the following dual function is obtained:
$$\max_{\alpha, \beta} \; \langle \alpha, \mu \rangle + \langle \beta, \nu \rangle - \frac{1}{2\varepsilon} \sum_{i,j} \big[ \alpha_i + \beta_j - C_{ij} \big]_+^2.$$
The dual problem can be directly solved by the FISTA algorithm, chosen due to its optimal convergence guarantees for problems of this type and due to its simple iterates, as shown in the supplementary material (Alg. 5). The transport map is given by $P_{ij} = \frac{1}{\varepsilon} \big[ \alpha_i + \beta_j - C_{ij} \big]_+$. To improve the convergence speed and allow higher values of $\varepsilon$, we also consider a proximally regularized version. The cost function, whose derivation is contained in the supplementary material, is:
This is again solved using the FISTA algorithm (Alg. 6).
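A sketch of accelerated (FISTA-style) gradient ascent on the quadratically regularized dual, assuming the $[\cdot]_+$ form of the plan given above; the step size and iteration count are illustrative choices, not the paper's settings:

```python
import numpy as np

def quad_ot_fista(C, a, b, gamma=0.1, iters=5000):
    # Accelerated gradient ascent on the smooth dual of quadratically
    # regularized OT; the plan is P_ij = max(alpha_i + beta_j - C_ij, 0)/gamma.
    n, m = C.shape
    alpha = np.zeros(n); beta = np.zeros(m)
    ya, yb = alpha.copy(), beta.copy()
    t = 1.0
    lr = gamma / (n + m)                      # conservative 1/L step size
    for _ in range(iters):
        R = np.maximum(ya[:, None] + yb[None, :] - C, 0.0)
        ga = a - R.sum(axis=1) / gamma        # dual gradient w.r.t. alpha
        gb = b - R.sum(axis=0) / gamma        # dual gradient w.r.t. beta
        na, nb = ya + lr * ga, yb + lr * gb
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        ya = na + (t - 1) / t_next * (na - alpha)   # momentum extrapolation
        yb = nb + (t - 1) / t_next * (nb - beta)
        alpha, beta, t = na, nb, t_next
    return np.maximum(alpha[:, None] + beta[None, :] - C, 0.0) / gamma

P = quad_ot_fista(np.array([[0.0, 1.0], [1.0, 0.0]]),
                  np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

Unlike Sinkhorn, no exponential of the cost appears, which is the source of the improved numerical stability.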
4 Stochastic Estimation Methods
Full batch methods rely on the option to use batches that are indicative of the entire problem. The required batch size is enormous for large-scale tasks. In practice, the fact that close points in the data space have similar values of their Lagrange multipliers suggests the usage of function classes that have this property intrinsically. Therefore, the Wasserstein distance is commonly approximated with a NN $f_w$. The Kantorovich-Rubinstein duality leads to a natural formulation using this NN:
$$\max_{w : \|f_w\|_L \le 1} \; \mathbb{E}_{x \sim \hat{\mu}}[f_w(x)] - \mathbb{E}_{z \sim \zeta}\big[f_w(g_\theta(z))\big].$$
The key part of this formulation is the Lipschitz constraint $\|f_w\|_L \le 1$. In practice, one of two ways is used to ensure the Lipschitzness of a NN: either a constraint penalization is added to the loss function, or the NN is constrained to only represent 1-Lipschitz functions.
4.1 Lipschitz Regularization
Here the following observation is used: if $\|\nabla f(x)\| \le 1$ holds for all $x$, then $f$ is 1-Lipschitz. Based on this fact, a simple regularization scheme, named gradient penalty, has been proposed and is widely used in practice:
$$\lambda \, \mathbb{E}_{\hat{x}} \Big[ \big( \|\nabla_{\hat{x}} f_w(\hat{x})\|_2 - 1 \big)^2 \Big], \qquad \hat{x} = \epsilon x + (1 - \epsilon) g_\theta(z), \; \epsilon \sim U[0, 1].$$
Note that in this formulation, the number of constraints is proportional to the product of the number of samples, the number of generated images, and the granularity of the interpolation parameter $\epsilon$, which makes the algorithm converge only slowly.
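To make the penalty concrete without an autograd framework, the following toy sketch uses a linear critic $f(x) = w^\top x$, whose input gradient is $w$ everywhere; the interpolation scheme matches the description above, while the critic itself is a deliberately trivial stand-in:

```python
import numpy as np

def gradient_penalty(w, x_real, x_fake, lam=10.0, seed=0):
    # Sample random interpolates between real and fake points, as in WGAN-GP.
    rng = np.random.default_rng(seed)
    eps = rng.uniform(size=(len(x_real), 1))
    x_hat = eps * x_real + (1 - eps) * x_fake
    # For the linear critic f(x) = w @ x the gradient at every x_hat is w,
    # so the penalty reduces to lam * (||w|| - 1)^2.
    grad = np.tile(w, (len(x_hat), 1))
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

w = np.array([3.0, 4.0])                       # ||w|| = 5
x_real = np.zeros((8, 2)); x_fake = np.ones((8, 2))
gp = gradient_penalty(w, x_real, x_fake)       # 10 * (5 - 1)^2 = 160
```

In a real implementation the gradient at each interpolate is obtained by backpropagation through the critic rather than analytically.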
4.2 Lipschitz Constrained NN
One can interpret NNs as hierarchical functions composed of matrix multiplications, convolutions (which can also be written as matrix multiplications), and non-linear activation functions:
$$f = \phi_L \circ W_L \circ \dots \circ \phi_1 \circ W_1.$$
The Lipschitz constant of such a function can be bounded from above by the product of the Lipschitz constants of its layers:
$$\mathrm{Lip}(f) \le \prod_{l=1}^{L} \mathrm{Lip}(\phi_l) \, \mathrm{Lip}(W_l).$$
Therefore, if each layer is 1-Lipschitz, the entire NN is 1-Lipschitz. Common activation functions like ReLU, leaky ReLU, sigmoid, tanh, and softmax are 1-Lipschitz. Therefore, if the linear maps are 1-Lipschitz, so is the entire network. The Lipschitz constant of a linear map is given by its spectral norm, and in the WGAN-SN algorithm the spectral norm is computed using the power method. The power method (Alg. 1) converges linearly, with a rate depending on the ratio of the two largest singular values $\sigma_2 / \sigma_1$. For matrices, this is done by using simple matrix multiplications. For convolutions, the WGAN-SN algorithm reshapes the filter kernels to 2D, applies the power method, and reshapes the result back. It is trivial to construct cases where this is arbitrarily wrong, and in Fig. 4 the deviation from 1-Lipschitzness is demonstrated. A more detailed example is shown in the supplementary material. Therefore, a mathematically correct algorithm, namely WGAN-SNC, is proposed, where in each iteration we apply a forward and a backward convolution to a vector, which exactly mimics multiplication with the matrix induced by the convolution. Gouk et al. proposed a similar power method for classification networks and projected the weights back onto the feasible set after each update step. In the supplementary material it is shown empirically, on simple examples, that this is too prohibitive to estimate the Wasserstein distance reliably. Therefore, the WGAN-SNC algorithm applies the power method as a projection layer, similar to the WGAN-SN algorithm. In that layer, the power iteration variable persists across update steps, an additional iteration is run during training, and the projection is used for backpropagation.
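The gap between the reshaping heuristic and the true operator norm can be reproduced on a tiny 1-D circular convolution (the kernel and sizes below are illustrative):

```python
import numpy as np

def power_method_norm(A, iters=200, seed=0):
    # Estimate the spectral norm ||A||_2 by power iteration on A^T A.
    v = np.random.default_rng(seed).standard_normal(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(A @ v)

# 1-D circular convolution with kernel k, written as its induced 4x4 matrix.
k = np.array([1.0, 1.0])
kp = np.concatenate([k, np.zeros(2)])
C = np.array([[kp[(i - j) % 4] for j in range(4)] for i in range(4)])

true_norm = power_method_norm(C)   # operator norm of the convolution: 2.0
reshaped_norm = np.linalg.norm(k)  # reshaping heuristic: sqrt(2) ~ 1.41
```

Normalizing by `reshaped_norm` would therefore still leave a layer with Lipschitz constant $\sqrt{2}$, and the error compounds multiplicatively with depth.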
The base architecture for all the NNs in this work is a standard convolutional NN as used by the WGAN-SN, which is based on the DCGAN. Details are described in the supplementary material. The default optimizer is Adam with the parameter setting from WGAN-GP. We use 1 discriminator iteration for the WGAN-SN(C) algorithms and 5 for WGAN-GP.
5.1 MNIST Manifold Comparison
For this example, a generator NN with 1 hidden layer of 500 neurons was trained, taking a noise vector as input and producing an image as output. This network is trained using the Sinkhorn-Knopp algorithm on a batch of samples, the manifold of which is shown in Fig. 1. In accordance with the image processing literature, the L1 norm produces crisper images and transitions between the images than the other cost functions. However, not all the images in the manifold show digits. On the other hand, the L2 norm produces digit images everywhere, similar to the output of the WGAN-GP algorithm on large datasets, but the transitions are blurry. The cosine distance is just a normalized and squared L2 distance. Still, the resulting manifold is quite different, as it fails to capture all the digits. Also, the images are blurrier than when using the actual L2 norm. This leads to the conclusion that by normalizing the images, information is lost and it is harder to separate different images. While the SSIM does generate realistic digit images, it fails at capturing the entire distribution of images, i.e. some digits do not occur in the manifold.
Impact of different cost functions on the MNIST manifold, trained using a Sinkhorn-GAN. Notice the different interpolations between the digits (L1 sharper, L2 blurrier), the image quality (for L1 some images show no digit), and the occurrence of each digit in the manifold (SSIM is missing 2, 4, 5, 6).
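The cost functions compared above can be sketched as follows (a minimal NumPy helper; `cost_matrix` is our illustrative name, and SSIM is omitted since it requires windowed image statistics):

```python
import numpy as np

def cost_matrix(X, Y, kind="l2"):
    # Pairwise ground costs between flattened image batches X (n,d) and Y (m,d).
    D = X[:, None, :] - Y[None, :, :]
    if kind == "l1":
        return np.abs(D).sum(axis=-1)
    if kind == "l2":
        return np.sqrt((D ** 2).sum(axis=-1))
    if kind == "cosine":                     # one minus cosine similarity
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
        return 1.0 - Xn @ Yn.T
    raise ValueError(kind)

X = np.array([[1.0, 0.0]]); Y = np.array([[0.0, 1.0]])
```

The identity $1 - \cos(x, y) = \tfrac{1}{2}\|\hat{x} - \hat{y}\|_2^2$ for unit vectors $\hat{x}, \hat{y}$ is what makes the cosine distance a normalized, squared L2 distance.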
5.2 Hyperparameter Dependence
In this section, the hyperparameter dependence of the stochastic algorithms is tested on simple image-based examples. To this end, two batches of size $b$ are sampled randomly from the CIFAR dataset and the Wasserstein distance is estimated based on those samples. A moderate batch size is used due to the memory restrictions of our GPU, which also allows using full batch gradient descent even for the NN approaches. The results are shown in Fig. 2. One can see that the stability of the gradient penalty depends on the learning rate of the optimizer and on the setting of the Lagrange multiplier $\lambda$. That parameter sensitivity explains the common observation that the Wasserstein estimate heavily oscillates in the initial iterations of the generator. Additionally, the estimate has still not converged even after many full batch iterations.
As a comparison, we show the dependence of the fullbatch estimates on their hyperparameters in Fig. 3. For the entropy regularized versions, $\varepsilon$ defines a tradeoff between numerical stability and result quality, where moving the center drastically increases the range of well-performing values. For the quadratically regularized versions, on the other hand, $\varepsilon$ trades off runtime against the quality of the estimation.
5.3 Limitations of Fullbatch Methods
In the following two experiments, we show that cost functions without adversarial training do not work well without enormous batch sizes. To showcase the ability of current generative models, we learn a mapping from a set of noise vectors to a set of images using the Wasserstein distance for a batch of size 4000. The resulting images are shown in the supplementary material for all algorithms. To show that this does not easily scale to larger datasets, another experiment is designed: the transport map is learned for two different batches taken from CIFAR. The results for an ever-increasing number of samples in a batch are shown in Fig. 5. This gives an empirical evaluation of the statistical properties of the Wasserstein distance: the estimate decreases with sample size at the rate known from the literature [4, 27]. Even though those images are sampled from the same distribution, the cost is still higher than when using blurred images. Samples produced by the Sinkhorn solver in Fig. 8 yield an average Wasserstein estimate that is smaller than for samples taken from the dataset itself using the L2-norm. Therefore, it is better for the generator to produce samples like this.
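The batch-size bias can be reproduced in one dimension, where $W_1$ between uniform empirical measures has a sorted-samples closed form (the sample sizes below are illustrative, not the paper's settings):

```python
import numpy as np

# Even two batches drawn from the SAME distribution report a positive
# Wasserstein distance, and the bias shrinks only slowly with batch size.
def w1_same_law(n, rng):
    x = np.sort(rng.standard_normal(n))
    y = np.sort(rng.standard_normal(n))
    return np.mean(np.abs(x - y))   # 1-D closed form for uniform weights

rng = np.random.default_rng(0)
small = np.mean([w1_same_law(10, rng) for _ in range(200)])
large = np.mean([w1_same_law(1000, rng) for _ in range(200)])
# small > large > 0: a finite batch never reports a distance of zero
```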
5.4 Comparison of Algorithms
Salimans et al. demonstrated that with enough computational power it is possible to get state-of-the-art performance using fullbatch methods. However, we do not possess that kind of computational power, and therefore the setting proposed by Genevay et al. was used: a standard DCGAN with a small batchsize, which reproduces their results for the Sinkhorn GAN. For the full batch methods, instead of minimizing the Wasserstein distance, the Sinkhorn divergence $S_\varepsilon(\mu, \nu) = OT_\varepsilon(\mu, \nu) - \frac{1}{2} OT_\varepsilon(\mu, \mu) - \frac{1}{2} OT_\varepsilon(\nu, \nu)$ is minimized instead. The cost function is the mean squared error on an adversarially learned feature space. If the adversarial feature space is learned on the Wasserstein distance alone, then features are just pushed away from each other. The Sinkhorn divergence, on the other hand, also has attractor terms, which force the network to encode images from the same distribution similarly.
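The debiasing effect of the Sinkhorn divergence can be sketched as follows (1-D squared-distance cost for brevity; a production implementation would share and log-stabilize the scaling loops):

```python
import numpy as np

def ot_eps(C, a, b, eps=0.1, iters=300):
    # Entropic OT transport cost via Sinkhorn scaling (no debiasing).
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return float(np.sum(P * C))

def sinkhorn_divergence(x, y, eps=0.1):
    # S(x, y) = OT(x, y) - OT(x, x)/2 - OT(y, y)/2; the self terms are the
    # "attractor" corrections that make the divergence vanish at x = y.
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    Cxy = (x[:, None] - y[None, :]) ** 2
    Cxx = (x[:, None] - x[None, :]) ** 2
    Cyy = (y[:, None] - y[None, :]) ** 2
    return (ot_eps(Cxy, a, b, eps)
            - 0.5 * ot_eps(Cxx, a, a, eps)
            - 0.5 * ot_eps(Cyy, b, b, eps))

x = np.array([0.0, 1.0])
```

Unlike the raw entropic cost, the divergence of a batch with itself is zero, which is what prevents the feature encoder from simply pushing all samples apart.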
Related GAN algorithms that also use full batch solvers have been evaluated using the Inception Score (IS); a comparison to them is shown in Tab. 1. For the other methods, we also evaluated using the Fréchet Inception Distance (FID) in Tab. 2. In the constrained setting, the fullbatch WGANs in their current form are competitive with or better than similar fullbatch algorithms like the MMD GAN. However, the stochastic methods work better for larger-scale tasks. The WGAN-SN performs better than the variant using Gradient Penalty, and performs similarly to the WGAN-SNC. One thing to note is that the WGAN-SN actually works better using a hinge loss, even though no theoretical justification is given for that.
Based on our experimental results, we want to share the following empirical insights to successfully train WGANs for various tasks.
Stochastic vs full batch estimation: If it is possible to compute the Wasserstein distance for a given problem accurately enough with a full batch approach, then this approach has a lot of advantages, like convergence guarantees, better Wasserstein estimates, and a clear interpretation of the hyperparameters. Depending on the batch size, the optimization algorithm might be quite slow though. On the other hand, full batch estimation is not possible with simple cost functions for real-world image datasets, as we show for the CIFAR dataset (see Fig. 8).
Useful baseline: The batch-wise estimation gives an indication of the Wasserstein distance between two batches of the dataset. This in turn can be used to estimate how well the NN architecture and training algorithm are able to fit the data (see Fig. 2).
Full batch estimation
Squared vs entropy regularization (see Fig. 5): Entropy regularization converges extremely fast for large values of $\varepsilon$; however, the performance quickly deteriorates even for small changes in $\varepsilon$. Quadratic regularization, on the other hand, is numerically stable for every value of $\varepsilon$ we tested (see Fig. 3). Proximal regularization adds further stability while only slightly changing the algorithm.
Batch size: As a rule of thumb, a larger batchsize is better than a smaller one to accurately estimate the Wasserstein distance. The necessary batchsize can be estimated using indicative batches (e.g. the starting batch and batches from the data distribution).
Convergence guarantees: Full batch methods provably converge to the global optimal solution and therefore accurately estimate the Wasserstein distance with a clear meaning of each hyperparameter (see Fig. 3).
Convergence: In principle, methods based on NNs take longer to converge, have no convergence guarantees, and it is hard to tell whether they really approximate a Wasserstein distance. Additionally, it is unclear how gradients of intermediate approximations relate to those of a converged approximation, resulting in the mystifying nature of WGAN training.
Projection: Projecting onto the feasible set is too restrictive. Therefore, the projection is done as part of the loss function (see Fig. 6).
Gradient norm of NNs: Current methods to ensure Lipschitzness in NNs have in common that, while the actual Lipschitz constant differs from 1, it is empirically stable (see Fig. 4).
7 Conclusions & Practical Guide
We have reviewed and extended various algorithms for computing and minimizing the Wasserstein distance between distributions as part of a larger generative system. The way to make use of those insights for one's own problem is to look at the Wasserstein distance between indicative batches, e.g. the initial batches produced by the generator and batches from the data distribution. This also gives a way to gauge how long a NN will take to converge and which hyperparameters have an impact on the estimation. Estimating the Wasserstein distance on indicative batches can safely be done with a regularized solver, due to the small differences in the Wasserstein estimates. For entropy regularization, we encourage the use of proximal regularization. If the full batch estimation of the gradient is sufficient, then a full batch GAN provides reliable results. However, for most GAN benchmarks this is not the case; then Gradient Penalty tends to work well, but is really slow. WGAN-SN is a lot faster, but mathematically incorrect. We propose a theoretically sound version of this algorithm, showing similar performance on CIFAR. The cost function used in the Wasserstein distance controls the geometry of the generated manifold and therefore determines the interpolations between the images. The high cost between different samples taken from the same dataset reveals problems with current non-adversarial cost functions on generative tasks and is a first step towards modelling better cost functions.
-  (2017) Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862.
-  (2017) Wasserstein GAN. arXiv preprint arXiv:1701.07875.
-  (2009) A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2 (1), pp. 183–202.
-  (2017) Central limit theorems for Sinkhorn divergence between probability distributions on finite spaces and statistical applications. arXiv preprint arXiv:1711.08947.
-  (2017) Smooth and sparse optimal transport. arXiv preprint arXiv:1710.06276.
-  (2011) A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40 (1), pp. 120–145.
-  (2018) Image blind denoising with generative adversarial network based noise modeling. pp. 3155–3164.
-  (2013) Sinkhorn distances: lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pp. 2292–2300.
-  Learning generative models with Sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pp. 1608–1617.
-  (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
-  (2018) Regularisation of neural networks by enforcing Lipschitz continuity. arXiv preprint arXiv:1804.04368.
-  (2017) Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5769–5779.
-  (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637.
-  (2018) A style-based generator architecture for generative adversarial networks. arXiv preprint arXiv:1812.04948.
-  Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint.
-  (2018) Are GANs created equal? A large-scale study. In Advances in Neural Information Processing Systems, pp. 700–709.
-  (2018) Differential properties of Sinkhorn approximation for learning with Wasserstein distance. In Advances in Neural Information Processing Systems, pp. 5859–5870.
-  (2017) The numerics of GANs. In Advances in Neural Information Processing Systems, pp. 1823–1833.
-  (2018) Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957.
-  Computational optimal transport. Foundations and Trends in Machine Learning 11 (5-6), pp. 355–607.
-  (2018) Do GAN loss functions really matter? arXiv preprint arXiv:1811.09567.
-  (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
-  (2016) Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234–2242.
-  (2018) Improving GANs using optimal transport. arXiv preprint arXiv:1803.05573.
-  (2018) On the convergence and robustness of training GANs with regularized optimal transport. In Advances in Neural Information Processing Systems, pp. 7091–7101.
-  (2017) Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2107–2116.
-  (2018) Minimax distribution estimation in Wasserstein distance. arXiv preprint arXiv:1802.08855.
-  (2008) Optimal transport: old and new. Vol. 338, Springer Science & Business Media.
-  (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612.
-  (2018) A fast proximal point method for Wasserstein distance. arXiv preprint arXiv:1802.04307.
-  (2017) Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging 3 (1), pp. 47–57.
-  Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint.
8 Supplementary Material
8.1 Derivation of the PDHG algorithm
The initial starting formulation is given as follows:
$$\min_{P \ge 0} \; \langle P, C \rangle \quad \text{s.t.} \quad P \mathbf{1} = \mu, \; P^\top \mathbf{1} = \nu.$$
Forming the Lagrangian of this formulation and reformulating yields:
$$\min_{P \ge 0} \max_{\alpha, \beta} \; \langle P, C \rangle + \langle \alpha, P \mathbf{1} - \mu \rangle + \langle \beta, P^\top \mathbf{1} - \nu \rangle.$$
By reshaping $P$ and $C$ to one-dimensional vectors $x$ and $c$, and combining the constraints into a single linear system $Ax = b$, the following saddle point formulation is obtained:
$$\min_{x \ge 0} \max_{y} \; \langle c, x \rangle + \langle y, Ax - b \rangle.$$
The iterates are given in Alg. 2.
8.2 Derivation of quadratic regularization
The following problem is solved in this section:
$$\min_{P \ge 0} \; \langle P, C \rangle + \frac{\varepsilon}{2} \|P\|_F^2 \quad \text{s.t.} \quad P \mathbf{1} = \mu, \; P^\top \mathbf{1} = \nu.$$
To solve this, we make use of the dual formulation as shown by Blondel et al. They showed that, by using the convex conjugate, this is reformulated to the following dual problem:
$$\max_{\alpha, \beta} \; \langle \alpha, \mu \rangle + \langle \beta, \nu \rangle - \frac{1}{2\varepsilon} \sum_{i,j} \big[ \alpha_i + \beta_j - C_{ij} \big]_+^2.$$
To solve for $\alpha$ and $\beta$, we propose to use Alg. 5.
8.3 Calculation of FISTA-CENTER
The following problem is solved in this section:
This is simplified by the following sequence of equations:
Plugging this back into the formulation shown in Eq. (22) yields:
By solving for analytically, the following result is obtained:
Plugging this back into the initial equation finally results in:
Here, the gradient with respect to the first dual variable is straightforward to compute, and the gradient with respect to the second is given in a similar fashion. Using these gradients, we apply the FISTA algorithm again to solve the problem; the new transport plan is then recovered from the optimal dual variables as before. This way, it is possible to improve on the solution of the squared regularization for a given regularization weight.
8.4 Spectral Norm Regularization
This is in contrast to the actual SN-GAN, where the derivative is calculated through the algorithm. In the context of the Wasserstein GAN, however, Fig. 6 demonstrates that the projection approach empirically does not work even for the simplest examples.
Fig. 7 demonstrates that the Lipschitz constant of the WGAN-SN increases with the depth of the network, but not with the size of the filter kernels. That the performance of the discriminator increases with increasing filter weights is shown in the experimental results of Miyato et al., demonstrating the wide applicability of their algorithm.
8.5 Comparison Image Reconstruction Quality
In Fig. 8 we show the reconstruction capabilities of our full batch algorithms. Therein, images are reconstructed using the GAN algorithms with a fixed noise set. The estimated Wasserstein distance is used to learn to reconstruct this dataset using a standard NN.
In Fig. 9 we show the failure mode of full batch methods when the batch size is insufficient for the cost function, which in this case is the L2-norm. In Section 5, Fig. 5 reports the Wasserstein distance between dataset batches under the L2-norm; the failure-mode images shown here produce a smaller Wasserstein distance.