
How Well Do WGANs Estimate the Wasserstein Metric?

by   Anton Mallasto, et al.

Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution. Recently, a popular choice for this similarity measure has been the Wasserstein metric, which, in its Kantorovich duality formulation, is the supremum over a constrained class of potential functions of the difference between the expected values of the potential under the real data distribution and under the model distribution. In practice, the potential is approximated with a neural network called the discriminator. The duality constraints on the discriminator's function class are enforced only approximately, and the expectations are estimated from samples. This yields at least three sources of error: the approximate discriminator and constraints, the sample estimation of the expectations, and the optimization required to find the optimal potential. In this work, we study how well the methods used in generative adversarial networks to approximate the Wasserstein metric actually perform. In particular, we consider the c-transform formulation, which eliminates the need to enforce the constraints explicitly. We demonstrate that the c-transform allows a more accurate estimation of the true Wasserstein metric from samples but, surprisingly, does not perform best in the generative setting.
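To make the dual formulation concrete, here is a minimal sketch (not the paper's implementation) of estimating the Wasserstein metric from samples via the c-transform. For discrete samples, the c-transform of a potential phi can be computed exactly as phi^c(y) = min_x [c(x, y) - phi(x)], and the dual objective E_mu[phi] + E_nu[phi^c] lower-bounds the true transport cost, which for equal-size samples can be computed exactly by optimal assignment. The sample sizes, Gaussian toy distributions, and the zero potential are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def c_transform(phi, cost):
    # phi: (n,) potential values on the x-samples; cost: (n, m) matrix c(x_i, y_j)
    # phi^c(y_j) = min_i [ c(x_i, y_j) - phi(x_i) ]
    return np.min(cost - phi[:, None], axis=0)

def dual_objective(phi, cost):
    # Kantorovich dual value E_mu[phi] + E_nu[phi^c]:
    # a lower bound on the empirical transport cost for any phi
    return phi.mean() + c_transform(phi, cost).mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(64, 2))  # samples from the "data" distribution
y = rng.normal(1.0, 1.0, size=(64, 2))  # samples from the "model" distribution
cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)  # c(x, y) = ||x - y||

# Exact empirical Wasserstein-1 distance via optimal assignment
# (valid here because both samples have equal size and uniform weights).
rows, cols = linear_sum_assignment(cost)
w1_exact = cost[rows, cols].mean()

# Any potential gives a valid lower bound; phi = 0 is the crudest choice.
bound = dual_objective(np.zeros(len(x)), cost)
print(f"dual bound {bound:.3f} <= exact W1 {w1_exact:.3f}")
```

Maximizing the dual objective over phi (in WGANs, over the discriminator's parameters) tightens this bound toward the exact value; the paper studies how accurately that maximization recovers the true metric.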



A Convex Duality Framework for GANs

Generative adversarial network (GAN) is a minimax game between a generat...

Training Wasserstein GANs without gradient penalties

We propose a stable method to train Wasserstein generative adversarial n...

Solving Approximate Wasserstein GANs to Stationarity

Generative Adversarial Networks (GANs) are one of the most practical str...

Wasserstein Archetypal Analysis

Archetypal analysis is an unsupervised machine learning method that summ...

Wasserstein Introspective Neural Networks

We present Wasserstein introspective neural networks (WINN) that are bot...

Implicit Manifold Learning on Generative Adversarial Networks

This paper raises an implicit manifold learning perspective in Generativ...

On Relativistic f-Divergences

This paper provides a more rigorous look at Relativistic Generative Adve...