On the estimation of the Wasserstein distance in generative models

10/02/2019 · by Thomas Pinetz, et al.

Generative Adversarial Networks (GANs) have been used to model the underlying probability distribution of sample-based datasets. GANs are notorious for training difficulties and their dependence on arbitrary hyperparameters. One recent improvement in the GAN literature is to use the Wasserstein distance as loss function, leading to Wasserstein Generative Adversarial Networks (WGANs). Using this as a basis, we show various ways in which the Wasserstein distance is estimated for the task of generative modelling. Additionally, the secrets of training such models are presented and summarized at the end of this work. Where applicable, we extend current works to different algorithms, different cost functions, and different regularization schemes to improve generative models.




1 Introduction

GANs [10] have been successfully applied to tasks ranging from super-resolution [15], denoising [7], data generation [2], data refinement [26], and style transfer [32], to many more [14]. The core principle of GANs is to pit two models, most commonly Neural Networks (NNs), against each other in a game-theoretic way [10]. The first NN, denoted generator, tries to fit the data distribution of a dataset, and the second network, denoted discriminator, learns to distinguish between generated data and real data. Both networks learn during a so-called GAN game, and the final output is a generator network which fits the real data distribution. Still, the optimization dynamics of those networks are notoriously difficult and not well understood [18], leading survey works to conclude that no work has yet consistently outperformed the original non-saturating GAN formulation [16]. One key theoretical advancement is that the previously used Jensen-Shannon divergence is ill-defined in cases of limited overlap [1]. One common way to circumvent this problem is to use a different loss function, such as the non-saturating loss [10] or the Wasserstein distance [2]. Minimizing the Wasserstein distance yields clear convergence guarantees, given that the generator network is powerful enough [2]. Still, current formulations of the Wasserstein GAN (WGAN) heavily depend on the hyperparameter setting [16]. Our aim with this work is to explain why this is the case and what can be done to train WGANs successfully.

We review the usage of the Wasserstein distance as it is utilized in generative modelling, showcase the pitfalls of various algorithms, and propose possible alternatives.

In summary, our contributions are as follows:

  • A review and overview of common WGAN algorithms and their respective limitations.

  • A practical guide on how to apply WGANs to new datasets.

  • An extension to the squared entropy regularization for Optimal Transport [5], by using the Bregman distance and moving the center of the regularization.

  • An extension on the currently available approaches to ensure Lipschitz continuous discriminator networks.

The remainder of this paper is organized as follows. In Section 2, a recap of the Wasserstein distance in the context of GANs is given. Sections 3 and 4 describe all the algorithms in detail. Section 5 shows our experimental results. Our findings are summarized in Section 6 and conclusions are given in Section 7.

2 Preliminaries: Wasserstein Distance

The p-th Wasserstein distance between two probability distributions μ and ν on a metric space is defined as follows:

    W_p(μ, ν) = ( inf_{π ∈ Π(μ, ν)} ∫ c(x, y)^p dπ(x, y) )^{1/p},
where c defines the ground cost and Π(μ, ν) is the set of joint distributions with marginals μ and ν. In this work, only the 1-Wasserstein distance is considered in a discrete setting. This simplifies the whole problem to the following linear program:

    W(μ, ν) = min_{T ≥ 0} ⟨T, C⟩   s.t.  T 1 = μ,  Tᵀ 1 = ν,

where T is the transport plan and C is the cost matrix with C_ij = c(x_i, y_j) for any function c (not necessarily a distance). This optimization problem has the following dual formulation:

    max_{α, β} ⟨α, μ⟩ + ⟨β, ν⟩   s.t.  α_i + β_j ≤ C_ij  ∀ i, j.

Based on the optimality conditions of linear programming, an analytical solution for β is given by β_j = min_i (C_ij − α_i). By replacing the dual variables with functions, namely f(x_i) = α_i and g(y_j) = β_j, the following formulation is obtained:

    max_{f, g} E_{x∼μ}[f(x)] + E_{y∼ν}[g(y)]   s.t.  f(x) + g(y) ≤ c(x, y).

In case c is a distance, it has been proven in [28] that g = −f. Using that result and rearranging the constraints yields f(x) − f(y) ≤ c(x, y), which is satisfied for all functions which have Lipschitz constant L ≤ 1. This establishes the Kantorovich-Rubinstein duality, which is used in WGANs:

    W(μ, ν) = max_{‖f‖_L ≤ 1} E_{x∼μ}[f(x)] − E_{y∼ν}[f(y)].
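The discrete primal problem above is small enough to be solved exactly with an off-the-shelf LP solver. The following sketch (our own illustration, not from the paper) builds the marginal constraints for scipy.optimize.linprog and checks the result on a toy example where the distance equals 1 by construction.

```python
# Minimal sketch: discrete optimal transport as a linear program.
import numpy as np
from scipy.optimize import linprog

def wasserstein_lp(mu, nu, C):
    """Solve min_T <T, C> s.t. T 1 = mu, T^T 1 = nu, T >= 0."""
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row sums: sum_j T_ij = mu_i
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # column sums: sum_i T_ij = nu_j
    b_eq = np.concatenate([mu, nu])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun, res.x.reshape(m, n)

# Two pairs of point masses, each at L1 distance 1: W must equal 1.
x = np.array([[0.0], [2.0]])
y = np.array([[1.0], [3.0]])
C = np.abs(x - y.T)                        # L1 ground cost matrix
mu = np.array([0.5, 0.5])
nu = np.array([0.5, 0.5])
w1, T = wasserstein_lp(mu, nu, C)
```

The recovered plan T satisfies both marginal constraints exactly, which is the property the full batch methods of Section 3 exploit.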
The objective in WGANs is to leverage the Wasserstein distance to train a NN to model the underlying distribution μ, given an empirical distribution μ̂. In the GAN framework the generated distribution ν_θ is constructed by using a known base distribution, e.g. a Gaussian z ∼ N(0, I), and transforming z using a NN g with parameters θ as follows: x̃ = g_θ(z). The parameters θ are then learned by minimizing the distance between the parametric distribution and the empirical one (ν_θ and μ̂) using the following loss function:

    min_θ W(μ̂, ν_θ).
Due to changes in the generator parameters during the optimization process, the Wasserstein distance problem changes and is reevaluated in each iteration. Therefore, the speed of computation is essential. In the OT literature an additional regularization term is added to improve the speed of convergence, at the price of slightly sub-optimal results [8]. This results in the following formulation:

    W_ε(μ, ν) = min_{T ≥ 0} ⟨T, C⟩ + ε R(T)   s.t.  T 1 = μ,  Tᵀ 1 = ν.
We distinguish between two different families of algorithms, namely sub-optimal fullbatch methods and stochastic methods. The following algorithms for solving the Wasserstein distance problem to learn generative models are incorporated in our work:

  1. Fullbatch Methods

    1. Unregularized Wasserstein Distance (Eq. (2))

      1. Primal Dual Hybrid Gradient solver (PDHG; Alg. 2)

    2. Regularized Wasserstein Distance (Eq. (7))

      1. Negative Entropy Regularization

        1. Sinkhorn [8] (Sinkhorn; Alg. 3)

        2. Sinkhorn-Center [30] (Sinkhorn-Center; Alg. 4)

      2. Quadratic Regularization

        1. FISTA (FISTA; Alg. 5)

        2. FISTA-Center (FISTA-Center; Alg. 6)

  2. Stochastic Methods (Eq. (5))

    1. Regularized NNs:

      1. WGAN with Gradient Penalty [12] (WGAN-GP)

    2. Constrained NNs:

      1. WGAN with Spectral Normalization [19] (WGAN-SN)

      2. WGAN with convolutional Spectral Normalization (WGAN-SNC)

The main iterations for all algorithms are detailed in the supplementary material.

3 Fullbatch Methods

Fullbatch estimation means taking a data-batch of size m from both probability densities and solving the Wasserstein distance for this subset. The idea is that the estimated Wasserstein distance is representative for the entire dataset. This is done by setting the probability for each image in the batch to 1/m. By the optimality conditions of convex problems, the so-called transport map T is recovered. T is a mapping between elements in μ̂ and ν_θ and is plugged into the following equation to learn the generative model:

    min_θ Σ_{i,j} T_ij c(g_θ(z_i), x_j).
Note that it is not necessary to differentiate through the computation of T, due to the envelope theorem, as has been noted in [24, 30].
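This generator update can be sketched as follows (notation ours): with the plan T held fixed by the envelope theorem, the loss and its gradient with respect to the generated batch are available in closed form for the squared L2 cost.

```python
# Sketch: generator loss L(theta) = sum_ij T_ij * c(g_theta(z_i), x_j)
# with T treated as a constant (envelope theorem), squared L2 cost.
import numpy as np

def generator_loss_and_grad(gen, data, T):
    """gen: (m, d) generated batch; data: (n, d) real batch; T: (m, n) fixed plan."""
    diff = gen[:, None, :] - data[None, :, :]     # pairwise differences, (m, n, d)
    cost = (diff ** 2).sum(-1)                    # squared L2 cost matrix
    loss = (T * cost).sum()                       # <T, C(theta)>
    grad = 2.0 * (T[:, :, None] * diff).sum(1)    # d loss / d gen, (m, d)
    return loss, grad

rng = np.random.default_rng(0)
gen = rng.normal(size=(4, 2))        # stand-in for g_theta(z) outputs
data = rng.normal(size=(4, 2))
T = np.full((4, 4), 1.0 / 16)        # uniform plan, just for the sketch
loss, grad = generator_loss_and_grad(gen, data, T)
```

In a real training loop this gradient would be backpropagated through the generator network; here the generated points themselves stand in for its output.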

There are two main advantages of fullbatch estimation. First, the convex solvers have convergence guarantees, which are easily checked in practice [5]. Second, the convergence speed is faster than with stochastic estimates [25].

3.1 Unregularized Solver

As a baseline, a solver for the unregularized Wasserstein distance is proposed. For this, the Primal-Dual Hybrid Gradient (PDHG) method [6] is used. To apply the PDHG, the problem is transformed into a saddle point problem as follows:

    min_{t ≥ 0} max_{y} ⟨c, t⟩ + ⟨y, A t − b⟩,

where the transport plan T and the cost matrix C are reshaped to one-dimensional vectors t and c, and the constraints T 1 = μ, Tᵀ 1 = ν are combined into a single linear system A t = b. The full computation of the saddle point formulation and the steps of the algorithm are shown in the supplementary material.

3.2 Regularized Optimal Transport

Generative modelling by solving the unregularized Wasserstein distance problem is computationally infeasible [9]. The solution to this problem in the OT literature is to solve for the regularized Wasserstein distance instead [17]. In this work, either negative entropy regularization or quadratic regularization is utilized; they are defined as:

    R_e(T) = Σ_{i,j} T_ij (log T_ij − 1),    R_q(T) = (1/2) ‖T‖²_F.
Negative entropy regularization leads to the Sinkhorn algorithm [8]. While the Sinkhorn algorithm converges rapidly, it also tends to be numerically unstable, and only a small range of values for ε leads to satisfactory results. One approach to reduce instabilities is to adopt a proximal regularization term based on the Bregman distance¹:

    T^{k+1} = argmin_{T ∈ U(μ, ν)} ⟨T, C⟩ + ε D(T, T^k).

¹The Bregman distance induced by a convex function h is defined as follows: D_h(x, y) = h(x) − h(y) − ⟨∇h(y), x − y⟩.
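The plain Sinkhorn-Knopp iteration [8] can be sketched compactly; the value of ε and the fixed iteration count below are our own choices for illustration.

```python
# Compact Sinkhorn-Knopp iteration for entropy-regularized OT.
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, iters=500):
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # scale to match column marginals
        u = mu / (K @ v)                 # scale to match row marginals
    T = u[:, None] * K * v[None, :]      # regularized transport plan
    return T, (T * C).sum()

rng = np.random.default_rng(1)
C = rng.random((5, 5))
mu = np.full(5, 0.2)
nu = np.full(5, 0.2)
T, w_eps = sinkhorn(mu, nu, C)
```

The exponentiation of −C/ε is where the numerical instability mentioned above originates: for small ε the kernel K underflows.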
Xie et al. [30] proposed to use a modified Sinkhorn-Knopp algorithm (Sinkhorn-Center) with the steps given in the supplementary material.

Another way to combat the numerical stability problems and blurry transport maps is to use quadratic regularization [5]. By plugging the quadratic regularization into the regularized Wasserstein distance, the following dual function is obtained:

    max_{α, β} ⟨α, μ⟩ + ⟨β, ν⟩ − (1/(2ε)) Σ_{i,j} [α_i + β_j − C_ij]_+².

The dual problem can be directly solved by the FISTA algorithm [3]. FISTA was chosen due to its optimal convergence guarantees for problems like this and due to its simple iterates, as is shown in the supplementary material (Alg. 5). The transport map is given by: T_ij = [α_i + β_j − C_ij]_+ / ε. To improve the convergence speed and allow higher values for ε, we also consider a proximal regularized version. The cost function, whose derivation is contained in the supplementary material, is:


This is again solved using the FISTA algorithm in Alg. 6.
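For illustration, the quadratically regularized dual can also be solved with plain projected gradient ascent instead of FISTA; the objective and the recovered plan T_ij = [α_i + β_j − C_ij]_+ / ε are the same. Step size and iteration count below are our own conservative choices, not tuned values from the paper.

```python
# Sketch: dual ascent for quadratically regularized OT (cf. [5]).
import numpy as np

def quad_reg_ot(mu, nu, C, eps=0.05, iters=40000):
    m, n = C.shape
    alpha, beta = np.zeros(m), np.zeros(n)
    step = eps / (4 * max(m, n))         # conservative step, below 1/L
    for _ in range(iters):
        P = np.maximum(alpha[:, None] + beta[None, :] - C, 0.0) / eps
        alpha += step * (mu - P.sum(1))  # ascent on the row potentials
        beta += step * (nu - P.sum(0))   # ascent on the column potentials
    T = np.maximum(alpha[:, None] + beta[None, :] - C, 0.0) / eps
    return T, (T * C).sum()

rng = np.random.default_rng(2)
C = rng.random((5, 5))
mu = np.full(5, 0.2)
nu = np.full(5, 0.2)
T, w = quad_reg_ot(mu, nu, C)
```

Unlike the entropic plan, this plan is sparse: entries with α_i + β_j ≤ C_ij are exactly zero, which is the source of the sharper transport maps mentioned above.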

4 Stochastic Estimation Methods

Full batch methods rely on the option to use batches which are indicative of the entire problem. The required batch size is enormous for large-scale tasks [20]. In practice, the fact that close points in the data space have similar values for their Lagrangian multipliers suggests the usage of functions which have this property intrinsically. Therefore, the Wasserstein distance is commonly approximated with a NN f_w. The Kantorovich-Rubinstein duality leads to a natural formulation using this NN:

    max_{w : ‖f_w‖_L ≤ 1} E_{x∼μ̂}[f_w(x)] − E_{z}[f_w(g_θ(z))].
The key part of this formulation is the Lipschitz constraint [21]. In practice, one of two approaches is used to ensure the Lipschitzness of a NN: either adding a constraint penalization to the loss function, or constraining the NN to only represent 1-Lipschitz functions.

4.1 Lipschitz Regularization

Here the following observation is used: if ‖∇f(x)‖ ≤ 1 holds for all x, then f is 1-Lipschitz. Based on this fact, a simple regularization scheme, named gradient penalty, has been proposed and is widely used in practice [12]:

    L_GP = λ E_{x̂}[(‖∇_x̂ f(x̂)‖₂ − 1)²],

where the x̂ are points interpolated between real and generated samples. One thing to note is that in this formulation the number of constraints is proportional to the product of the number of real samples, the number of generated images, and the granularity of the interpolation, which makes the algorithm converge only slowly.
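The effect of the penalty can be sketched on a critic simple enough that the input gradient is analytic: for f(x) = wᵀx, the gradient ∇_x f equals w everywhere, so the penalty reduces to λ(‖w‖ − 1)². All constants and names below are our own illustrative choices, not the setting of [12].

```python
# Sketch: gradient penalty on a linear critic f(x) = w^T x.
import numpy as np

rng = np.random.default_rng(3)
real = rng.normal(loc=2.0, size=(256, 2))   # "real" samples
fake = rng.normal(loc=0.0, size=(256, 2))   # "generated" samples

w = rng.normal(size=2)                      # critic parameters
lam, lr = 10.0, 0.02
for _ in range(1000):
    # Critic objective: E_real[f] - E_fake[f] - lam * (||grad_x f|| - 1)^2,
    # where grad_x f = w for this linear critic.
    grad_obj = real.mean(0) - fake.mean(0)
    nw = np.linalg.norm(w)
    grad_pen = 2.0 * lam * (nw - 1.0) * (w / nw)
    w += lr * (grad_obj - grad_pen)         # gradient ascent step
```

The penalty pulls ‖w‖ toward 1 rather than enforcing it exactly, which mirrors the soft nature of the constraint discussed above: the converged norm sits slightly above 1, at 1 + ‖E_real[x] − E_fake[x]‖ / (2λ).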

4.2 Lipschitz Constrained NN

One can interpret NNs as hierarchical functions, which are composed of matrix multiplications, convolutions (also representable as matrix multiplications) and non-linear activation functions:

    f(x) = (f_L ∘ f_{L−1} ∘ … ∘ f_1)(x).

The Lipschitz constant of such a function can be bounded from above by the product of the Lipschitz constants of its layers:

    Lip(f) ≤ Π_{l=1}^{L} Lip(f_l).

Therefore, if each layer is 1-Lipschitz the entire NN is 1-Lipschitz. Common activation functions like ReLU, leaky ReLU, sigmoid, tanh, and softmax are 1-Lipschitz. Therefore, if the linear maps are 1-Lipschitz, so is the entire network as well [11]. The Lipschitz constant of a linear map W is given by its spectral norm ‖W‖₂. In the WGAN-SN algorithm, the spectral norm is computed using the power method [19]. The power method (Alg. 1) converges linearly at a rate depending on the ratio of the two largest eigenvalues [11]. For matrices this is done by using simple matrix multiplications. For convolutions, in the WGAN-SN algorithm the filter kernels are reshaped to 2D, the power method is applied, and then the result is reshaped back [19]. It is trivial to construct cases where this is arbitrarily wrong [11], and in Fig. 4 the deviation from 1-Lipschitzness is demonstrated. A more detailed example is shown in the supplementary material. Therefore, a mathematically correct algorithm, namely WGAN-SNC, is proposed, where we apply a forward and a backward convolution onto a vector in each iteration, which actually mimics the multiplication with the matrix induced by the convolution. Gouk et al. [11] proposed a similar power method for classification and projected the weights back onto the feasible set after each update step of the network. In the supplementary material it is shown empirically on simple examples that this projection is too prohibitive to estimate the Wasserstein distance reliably. Therefore, the WGAN-SNC algorithm applies the power method as a projection layer, similar to the WGAN-SN algorithm. In that layer, the power iteration variable u persists across update steps, an additional iteration is run during training, and the projection is used for backpropagation.

Result: The spectrally normalized weight matrix W / σ(W)
for k = 1, …, K do
    v ← Wᵀ u / ‖Wᵀ u‖₂
    u ← W v / ‖W v‖₂
end for
σ(W) ← uᵀ W v
Algorithm 1 Power method: requires the matrix W, a persistent vector u, and the number of iterations K with default value K = 1.
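The discrepancy between the reshaped-kernel estimate and the true operator norm of a convolution is easy to reproduce. The following sketch (setup entirely ours) materializes the linear map induced by a zero-padded convolution, compares both estimates, and runs a power method on the induced matrix, which is the quantity WGAN-SNC mimics with forward and transposed convolutions.

```python
# Sketch: true conv operator norm vs. the reshaped-kernel estimate.
import numpy as np
from scipy.signal import convolve2d

H = W = 8
k = np.ones((3, 3))                # a simple kernel where the reshape trick fails

def conv_op(x):
    # Zero-padded 'same' convolution as a linear map on flattened HxW inputs.
    return convolve2d(x.reshape(H, W), k, mode="same").ravel()

# Materialize the induced matrix column by column from basis vectors.
M = np.stack([conv_op(e) for e in np.eye(H * W)], axis=1)
true_sigma = np.linalg.norm(M, 2)                      # exact operator norm
reshaped_sigma = np.linalg.norm(k.reshape(1, -1), 2)   # reshape-based estimate

# Power method on the induced matrix (one forward and one transposed
# application per iteration).
u = np.random.default_rng(4).normal(size=H * W)
for _ in range(50):
    u = M.T @ (M @ u)
    u /= np.linalg.norm(u)
pm_sigma = np.linalg.norm(M @ u)
```

For this all-ones kernel the reshaped estimate is ‖k‖_F = 3, while the true operator norm is well above it, so dividing by the reshaped estimate leaves the layer far from 1-Lipschitz.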

5 Experiments

The base architecture for all the NNs in this work is a standard convolutional NN as used by the WGAN-SN [19], which is based on the DCGAN [22]. Details are described in the supplementary material. The default optimizer is Adam with the parameter setting from the WGAN-GP [12]. We use 1 discriminator iteration for the WGAN-SN(C) algorithms and 5 for WGAN-GP.

5.1 MNIST Manifold Comparison

Here, the impact of the cost function on the generated manifold is investigated. The L2-norm is compared to the L1-norm, the cosine-distance [24] and the SSIM distance [29]. For this example a generator NN with 1 hidden layer of 500 neurons was trained, taking a noise vector z as input and producing an image as output. This network is then trained using the Sinkhorn-Knopp algorithm on a batch of samples, the manifold of which is shown in Fig. 1. In accordance with the image processing literature, the L1 norm produces crisper images and transitions between the images than the other cost functions. However, not all the images in the manifold show digits. On the other hand, the L2 norm produces digit images everywhere, similar to the output of the WGAN-GP algorithm on large datasets, but the transitions are blurry. The cosine-distance is just a normalized and squared L2-distance. Still, the resulting manifold is quite different, as it fails to capture all the digits. Also, the images are blurrier than with the actual L2-norm. This leads to the conclusion that by normalizing the images, information is lost and it becomes harder to separate different images. While the SSIM does generate realistic digit images, it fails at capturing the entire distribution of images, e.g. the digits 2, 4, 5 and 6 do not occur in the manifold.

(a) Cosine-distance
(b) L2 Norm
(c) L1 Norm
(d) SSIM
Figure 1:

Impact of different cost functions on the MNIST manifold, trained using a Sinkhorn-GAN. Notice the different interpolations between the digits (L1 sharper, L2 blurrier), the image quality (for L1 some images show no digit) and the occurrence of each digit in the manifold (SSIM is missing 2, 4, 5, 6).
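The cost matrices behind such comparisons can be sketched with scipy's cdist on flattened image batches (random stand-ins here, not MNIST); the last check in the test illustrates why the cosine-distance discards intensity information.

```python
# Sketch: pairwise cost matrices under different ground costs.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
batch_a = rng.random((16, 784))     # stand-in for flattened 28x28 images
batch_b = rng.random((16, 784))

C_l2 = cdist(batch_a, batch_b, "euclidean")
C_l1 = cdist(batch_a, batch_b, "cityblock")
C_cos = cdist(batch_a, batch_b, "cosine")   # 1 - cosine similarity, in [0, 2]
```

Rescaling all images in a batch leaves the cosine cost matrix unchanged, which is exactly the loss of information noted above.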

5.2 Hyperparameter Dependence

In this section the hyperparameter dependence of the stochastic algorithms is tested on simple image based examples. For this reason, two batches are sampled randomly from the CIFAR dataset and the Wasserstein distance is estimated based on those samples. The batch size is chosen to fit the memory restrictions of our GPU, so that full batch gradient descent can be used even for the NN approaches. The results are shown in Fig. 2. One can see that the stability of the gradient penalty depends on the learning rate of the optimizer and the setting of the Lagrangian multiplier λ. That parameter sensitivity explains the common observation that the Wasserstein estimate heavily oscillates in the initial iterations of the generator. Additionally, the estimate has still not converged even after a large number of full batch iterations.

(a) WGAN-GP: Adam
(b) WGAN-GP: Adam
Figure 2: Optimizer impact when using Gradient Penalty. Notice that the stability changes as the learning rate and the hyperparameter λ are changed. Additionally, the slow rate of convergence and the deviation from 1-Lipschitzness are shown here.

As a comparison, we show the dependence of the fullbatch estimates on their hyperparameter in Fig. 3. For the entropy regularized versions, the regularization parameter ε defines a tradeoff between numerical stability and getting good results, where moving the center drastically increases the range of good values. On the other hand, for the quadratically regularized versions, ε is a tradeoff between the runtime and the quality of the estimation.

Figure 3: Impact of the regularization term ε on the estimated Wasserstein distance and the number of iterations until convergence. The stability issues of the Sinkhorn(-Center) algorithms for badly chosen ε are demonstrated here.
Figure 4: Empirical gradient norm of the stochastic algorithms. WGAN-SNC produces stable gradients close to 1 during training, while the other methods fail to do so.

5.3 Limitations of Fullbatch Methods

In the following two experiments, we show that cost functions without adversarial training do not work well without enormous batch sizes. To showcase the ability of current generative models, we learn a mapping from a set of noise vectors to a set of images using the Wasserstein distance for a batch of size 4000. The resulting images are shown in the supplementary material for all algorithms. To show that this is not easy to scale to larger datasets, another experiment is designed: the transport map is learned for two different batches taken from CIFAR. The results for an ever increasing number of samples in a batch are shown in Fig. 5. This is an empirical evaluation of the statistical properties of the Wasserstein distance: the estimate decreases with the sample size at the known asymptotic rate [4, 27]. Even though those images are sampled from the same distribution, the cost is still higher than for blurred images. Samples produced by the Sinkhorn solver in Fig. 8 yield an average Wasserstein estimate that is smaller than that of samples taken from the dataset itself using the L2-norm. Therefore, it is better for the generator to produce samples like this.

(a) Wasserstein distance
(b) Number of Iterations
Figure 5: Wasserstein distance and number of iterations until convergence for a specific batch size using L2-norm between random samples on CIFAR.
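The sample-size effect discussed above can be reproduced in one dimension with scipy (setup ours): the empirical Wasserstein distance between two samples of the same Gaussian shrinks as the batch size grows, even though the true distance is zero.

```python
# Sketch: bias of the empirical Wasserstein distance vs. sample size.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(6)
estimates = {}
for n in (10, 100, 1000):
    # Average over several trials of two fresh samples from N(0, 1).
    trials = [wasserstein_distance(rng.normal(size=n), rng.normal(size=n))
              for _ in range(20)]
    estimates[n] = float(np.mean(trials))
```

A generator is therefore rewarded for producing batches that are cheaper to transport than fresh samples from the data distribution itself, which is the failure mode described above.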

5.4 Comparison of Algorithms

Salimans et al. [24] demonstrated that, with enough computational power, it is possible to get state-of-the-art performance using fullbatch methods. However, we do not possess that kind of computational power, and therefore the setting proposed by Genevay et al. [9] was used. We used a standard DCGAN, which reproduces their results for the Sinkhorn GAN. For the full batch methods, instead of the Wasserstein distance, the Sinkhorn divergence is minimized [9]. The cost function is the mean squared error on an adversarially learned feature space. If the adversarial feature space is only learned on the Wasserstein distance, then features are just pushed away from each other. The Sinkhorn divergence on the other hand also has attractor terms, which force the network to encode images from the same distribution similarly.

Related GAN algorithms which also use full batch solvers have been evaluated using the Inception Score [23] (IS), and a comparison to them is shown in Tab. 1. For the other methods, we also evaluated using the Fréchet Inception Distance (FID) [13] in Tab. 2. In the constrained setting, the fullbatch WGANs in their current form are competitive with or better than similar fullbatch algorithms like the MMD GAN. However, the stochastic methods work better for larger scale tasks. The WGAN-SN performs better than the variant using Gradient Penalty, and performs similarly to the WGAN-SNC. One thing to note is that the WGAN-SN actually works better using a hinge loss, even though no theoretical justification is given for that [19].

MMD [9]  Sinkhorn [9]  Sinkhorn-Center  FISTA  FISTA-Center

Table 1: Inception Score comparison of full batch methods for CIFAR.
FISTA-Center  Sinkhorn [9]  WGAN-GP [19]  WGAN-SN  WGAN-SNC
FID  -  -
Table 2: Visual quality comparison using Inception Score (IS, higher is better) and FID (lower is better). Note that the WGAN-SN results are our own, as the authors [19] did not evaluate the model using the Wasserstein loss.

6 Discussions

Based on our experimental results, we want to share the following empirical insights to successfully train WGANs for various tasks.

  • Stochastic vs full batch estimation: If it is possible to compute the Wasserstein distance for a given problem accurately enough with a full batch approach, then this approach has many advantages, like convergence guarantees, better Wasserstein estimates and a clear interpretation of hyperparameters. Depending on the batch size, the optimization algorithm might be quite slow though. On the other hand, full batch estimation is not possible with simple cost functions for real world image datasets, as we show for the CIFAR dataset (see Fig. 8).

  • Cost function: The cost function controls the geometry of the resulting generated distribution. For example, the L1-norm results in sharper images and sharper interpolations than the L2-norm [31] (see Fig. 1).

  • Useful baseline: The batch-wise estimation gives an indication of the Wasserstein distance given two batches of the dataset. This in turn is used to estimate how well the NN architecture and training algorithm are able to fit the data (see Fig. 2).

  • Full batch estimation

    • Squared vs entropy regularization (see Fig. 5): entropy regularization converges extremely fast for large values of ε; however, the performance quickly deteriorates even for small changes in ε. Quadratic regularization on the other hand is numerically very stable for any value of ε we tested (see Fig. 3). Proximal regularization adds stability while only minorly changing the algorithm.

    • Batch size: As a rule of thumb, a larger batchsize is better than a smaller one to accurately estimate the Wasserstein distance. The necessary batchsize is estimated using indicative batches (e.g. the starting batch and batches from the data distribution).

    • Convergence guarantees: Full batch methods provably converge to the globally optimal solution and therefore accurately estimate the Wasserstein distance, with a clear meaning of each hyperparameter (see Fig. 3).

  • Stochastic estimation

    • Convergence: In principle, methods based on NNs take longer to converge, have no convergence guarantees, and it is hard to tell if they really approximate a Wasserstein distance. Additionally, it is unclear how gradients of intermediate approximations relate to a converged approximation, resulting in the mystifying nature of WGAN training.

    • Projection: Projecting onto the feasible set is too restrictive. Therefore, the projection is done as part of the loss function (see Fig. 6).

    • Hyperparameter dependence: Current methods are extremely dependent on hyperparameters (GP on λ [16] and on the optimizer [19], SN on the network architecture) (see Fig. 2 and Fig. 7).

    • Gradient norm of NNs: Current methods to ensure Lipschitzness in NNs have in common that, while the actual Lipschitz constant is different from 1, it is empirically stable (see Fig. 4).

7 Conclusions & Practical Guide

We have reviewed and extended various algorithms for computing and minimizing the Wasserstein distance between distributions as part of larger generative systems. The way to make use of these insights in one's own problems is to look at the Wasserstein distance between indicative batches, e.g. the initial batches produced by the generator and batches from the data distribution. This also gives a way to gauge how long a NN will take to converge and which hyperparameters have an impact on the estimation. Estimating the Wasserstein distance on indicative batches can safely be done with a regularized solver, due to the small differences in the Wasserstein estimates. For entropy regularization, we encourage the use of proximal regularization. If the full batch estimation of the gradient is sufficient, then a full batch GAN provides reliable results. However, for most GAN benchmarks this is not the case, and then Gradient Penalty tends to work well, but is really slow. WGAN-SN is a lot faster, but mathematically incorrect. We propose a theoretically sound version of it, showing similar performance on CIFAR. The cost function used in the Wasserstein distance controls the geometry of the generated manifold and therefore determines the interpolations between the images. The high cost between different samples taken from the same dataset shows problems with current non-adversarial cost functions on generative tasks and is a first step towards modelling better cost functions.


  • [1] M. Arjovsky and L. Bottou (2017) Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862. Cited by: §1.
  • [2] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein gan. arXiv preprint arXiv:1701.07875. Cited by: §1.
  • [3] A. Beck and M. Teboulle (2009) A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences 2 (1), pp. 183–202. Cited by: §3.2.
  • [4] J. Bigot, E. Cazelles, and N. Papadakis (2017) Central limit theorems for sinkhorn divergence between probability distributions on finite spaces and statistical applications. arXiv preprint arXiv:1711.08947. Cited by: §5.3.
  • [5] M. Blondel, V. Seguy, and A. Rolet (2017) Smooth and sparse optimal transport. arXiv preprint arXiv:1710.06276. Cited by: 3rd item, §3.2, §3, §8.2.
  • [6] A. Chambolle and T. Pock (2011) A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of mathematical imaging and vision 40 (1), pp. 120–145. Cited by: §3.1.
  • [7] J. Chen, J. Chen, H. Chao, and M. Yang (2018) Image blind denoising with generative adversarial network based noise modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3155–3164. Cited by: §1.
  • [8] M. Cuturi (2013) Sinkhorn distances: lightspeed computation of optimal transport. In Advances in neural information processing systems, pp. 2292–2300. Cited by: item 1iA, §2, §3.2.
  • [9] A. Genevay, G. Peyré, and M. Cuturi (2018) Learning generative models with sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pp. 1608–1617. Cited by: §3.2, §5.4, Table 1, Table 2.
  • [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §1.
  • [11] H. Gouk, E. Frank, B. Pfahringer, and M. Cree (2018) Regularisation of neural networks by enforcing lipschitz continuity. arXiv preprint arXiv:1804.04368. Cited by: §4.2.
  • [12] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville (2017) Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pp. 5769–5779. Cited by: item 2(a)i, §4.1, §5.
  • [13] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637. Cited by: §5.4.
  • [14] T. Karras, S. Laine, and T. Aila (2018) A style-based generator architecture for generative adversarial networks. arXiv preprint arXiv:1812.04948. Cited by: §1.
  • [15] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint. Cited by: §1.
  • [16] M. Lucic, K. Kurach, M. Michalski, S. Gelly, and O. Bousquet (2018) Are gans created equal? a large-scale study. In Advances in neural information processing systems, pp. 700–709. Cited by: §1, 3rd item.
  • [17] G. Luise, A. Rudi, M. Pontil, and C. Ciliberto (2018) Differential properties of sinkhorn approximation for learning with wasserstein distance. In Advances in Neural Information Processing Systems, pp. 5859–5870. Cited by: §3.2.
  • [18] L. Mescheder, S. Nowozin, and A. Geiger (2017) The numerics of gans. In Advances in Neural Information Processing Systems, pp. 1823–1833. Cited by: §1.
  • [19] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957. Cited by: item 2(b)i, §4.2, §5.4, Table 2, §5, 3rd item, Figure 7, §8.4, §8.4.
  • [20] G. Peyré, M. Cuturi, et al. (2019) Computational optimal transport. Foundations and Trends® in Machine Learning 11 (5-6), pp. 355–607. Cited by: §4.
  • [21] Y. Qin, N. Mitra, and P. Wonka (2018) Do gan loss functions really matter?. arXiv preprint arXiv:1811.09567. Cited by: §4.
  • [22] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §5.
  • [23] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training gans. In Advances in Neural Information Processing Systems, pp. 2234–2242. Cited by: §5.4.
  • [24] T. Salimans, H. Zhang, A. Radford, and D. Metaxas (2018) Improving gans using optimal transport. arXiv preprint arXiv:1803.05573. Cited by: §3, §5.1, §5.4.
  • [25] M. Sanjabi, J. Ba, M. Razaviyayn, and J. D. Lee (2018) On the convergence and robustness of training gans with regularized optimal transport. In Advances in Neural Information Processing Systems, pp. 7091–7101. Cited by: §3.
  • [26] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb (2017) Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2107–2116. Cited by: §1.
  • [27] S. Singh and B. Póczos (2018) Minimax distribution estimation in wasserstein distance. arXiv preprint arXiv:1802.08855. Cited by: §5.3.
  • [28] C. Villani (2008) Optimal transport: old and new. Vol. 338, Springer Science & Business Media. Cited by: §2.
  • [29] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: §5.1.
  • [30] Y. Xie, X. Wang, R. Wang, and H. Zha (2018) A fast proximal point method for wasserstein distance. arXiv preprint arXiv:1802.04307. Cited by: item 1iB, §3.2, §3.
  • [31] H. Zhao, O. Gallo, I. Frosio, and J. Kautz (2017) Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging 3 (1), pp. 47–57. Cited by: 2nd item.
  • [32] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint. Cited by: §1.

8 Supplementary Material

8.1 Derivation of the PDHG algorithm

The initial starting formulation is given as follows:

    min_{T ≥ 0} ⟨T, C⟩   s.t.  T 1 = μ,  Tᵀ 1 = ν.

Forming the Lagrangian of this formulation and reformulating yields:

    min_{T ≥ 0} max_{y_1, y_2} ⟨T, C⟩ + ⟨y_1, T 1 − μ⟩ + ⟨y_2, Tᵀ 1 − ν⟩.

By reshaping T to a one-dimensional vector t and by combining the constraints T 1 = μ, Tᵀ 1 = ν into A t = b, the following saddle point formulation is obtained:

    min_{t ≥ 0} max_{y} ⟨c, t⟩ + ⟨y, A t − b⟩.

The iterates are given in Alg. 2.

8.2 Derivation of quadratic regularization

The following problem is solved in this section:


To solve this we make use of the dual formulation as shown by Blondel et al. [5] as follows:


They also showed that, using the convex conjugate, this can be reformulated into the following dual problem:


To solve for , we propose to use Alg. 5.
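The dual objective is elided in this extraction; recall that for quadratically regularized OT, Blondel et al. [5] obtain the concave dual max over (alpha, beta) of a.alpha + b.beta - 1/(2*gamma) * sum_ij [alpha_i + beta_j - C_ij]_+^2, with primal plan T_ij = [alpha_i + beta_j - C_ij]_+ / gamma. A minimal FISTA-style sketch of this ascent follows; the step size (a crude Lipschitz bound) and iteration count are illustrative assumptions, not the paper's exact Alg. 5.

```python
import numpy as np

def quad_reg_ot_fista(C, a, b, gamma=1.0, iters=3000):
    """Accelerated (FISTA-style) gradient ascent on the dual of
    quadratically regularized OT (Blondel et al.).  The primal plan is
    recovered as T = max(0, alpha_i + beta_j - C_ij) / gamma."""
    n, m = C.shape
    step = gamma / (n + m)  # 1/L with the crude bound L <= (n + m) / gamma
    x = np.zeros(n + m)     # concatenated duals (alpha, beta)
    y = x.copy()
    t = 1.0

    def plan(z):
        return np.maximum(0.0, z[:n, None] + z[None, n:] - C) / gamma

    for _ in range(iters):
        T = plan(y)
        # Gradient of the dual: residual of the marginal constraints.
        grad = np.concatenate([a - T.sum(axis=1), b - T.sum(axis=0)])
        x_new = y + step * grad  # ascent step on the concave dual
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return plan(x)
```

Because the marginal constraints are enforced exactly in the primal problem, the dual gradient (the marginal residual) vanishes at the optimum, which makes the residual a natural stopping criterion.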

8.3 Calculation of FISTA-CENTER

The following problem is solved in this section:


for . This is simplified by the following sequence of equations:


Plugging this back into the formulation shown in Eq. (22) yields:


Solving for analytically, the following result is obtained:


Plugging this back into the initial equation and setting finally results in:


Here, the gradient with respect to is simply:


The gradient with respect to is obtained in a similar fashion. Using this gradient, we apply the FISTA algorithm again to solve for . The new transport plan for this algorithm is then given by . In this way, it is possible to improve on the solution of the squared regularization for a given , by changing the center of the algorithm. Moreover, the algorithm is easily implemented in current deep learning frameworks such as TensorFlow.

8.4 Spectral Norm Regularization

This is in contrast to the actual SN-GAN [19], where the derivative is calculated through the algorithm. In the context of the Wasserstein GAN, however, Fig. 6 demonstrates that the projection approach empirically does not work even for the simplest examples.

(a) Projection
(b) Projection Layer
Figure 6: Here, the setting consists of two real data points (red) and two generated points (white). A NN with 4 hidden layers of 10 neurons each and ReLU activations was trained using gradient descent to compute the Wasserstein distance between those points. The function values of the NNs over this space are visualized, together with the corresponding gradients that a generator network would use to learn where to move the generated points. As the figure demonstrates, with projected gradient descent the NN gets stuck in a local minimum and is unable to learn the required non-linear function.

Fig. 7 demonstrates that the Lipschitz constant of the WGAN-SN increases with the depth of the network, but not with the number of filter kernels. That the performance of the discriminator increases with the filter width is shown in the experimental results of Miyato et al. [19], demonstrating the wide applicability of their algorithm.

(a) Increasing Layer Numbers
(b) Increasing Feature Maps
Figure 7: Gradient norms of points in the input space for a NN regularized by the convolutional spectral norm (WGAN-SNC) and by the matrix spectral norm (WGAN-SN [19]), computing the Wasserstein distance between two batches taken from the CIFAR dataset. The desired behaviour would be for those norms to equal . Notice that the matrix spectral norm is incorrect and the gradient norm is .
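The matrix spectral norm used by WGAN-SN is typically estimated by power iteration on each weight matrix. A self-contained sketch follows; the iteration count is an illustrative assumption (SN-GAN runs a single iteration per training step, reusing the vector across steps).

```python
import numpy as np

def spectral_norm(W, iters=1000, seed=0):
    """Estimate the largest singular value of the matrix W by power
    iteration, as used (amortized over training steps) in SN-GAN [19]."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(iters):
        v = W.T @ u
        v /= np.linalg.norm(v)  # right singular vector estimate
        u = W @ v
        u /= np.linalg.norm(u)  # left singular vector estimate
    return float(u @ W @ v)     # Rayleigh-quotient estimate of sigma_max
```

Note that this bounds only the spectral norm of the flattened weight matrix; as Fig. 7 illustrates, that is not the same as the spectral norm of the corresponding convolution operator.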

8.5 Comparison Image Reconstruction Quality

In Fig. 8 we show the reconstruction capabilities of our full-batch algorithms. Therein, are reconstructed using the GAN algorithm with a fixed noise set. The estimated Wasserstein distance is then used by a standard NN to learn to reconstruct this dataset.

In Fig. 9 we show the failure mode of full-batch methods when the batch size is insufficient for the cost function, which in this case is the L2-norm. In Section 5, Fig. 5, it is shown that the Wasserstein distance using the L2-norm is , while those images produce a Wasserstein distance of .

(a) FISTA-Center
(b) Sinkhorn-Center
(c) Sinkhorn
(d) FISTA:
(e) WGAN GP:
(f) WGAN SN:
Figure 8: A fixed set of noise samples is used to generate the dataset.
Figure 9: Sample images of the Sinkhorn solver on CIFAR using the L2-norm. The Wasserstein distance here is

8.6 List of Algorithms

Result: The final transport map:
while  do
end while
Algorithm 2 PDHG: Requires scaling factor , with default value and the cost vector .
Result: The final transport map:
for k =  do
end for
Algorithm 3 Sinkhorn: Requires the cost matrix and the regularization factor .
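Since the body of Alg. 3 is elided in this extraction, a minimal sketch of the standard Sinkhorn scaling iterations is given below; the regularization strength and iteration count are illustrative assumptions.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.5, iters=500):
    """Sketch of entropically regularized OT via Sinkhorn scaling.
    Returns the transport plan diag(u) K diag(v) with K = exp(-C / eps)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)   # scale to match the column marginals
        u = a / (K @ v)     # scale to match the row marginals
    return u[:, None] * K * v[None, :]
```

For small eps the entries of K underflow and the scaling vectors blow up; practical implementations then switch to log-domain (stabilized) updates.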
Result: The final transport map:
for k =  do
end for
Algorithm 4 Sinkhorn-Center: Requires the cost matrix and the regularization factor .
Result: The final transport map: with
for k =  do
end for
Algorithm 5 FISTA: Requires the cost matrix and the regularization factor .
Result: The final transport map:
for k =  do
       for l =  do
       end for
end for
Algorithm 6 FISTA-Center: Requires the cost matrix , number of inner iterations (default ) and the regularization factor .