How Well Do WGANs Estimate the Wasserstein Metric?

10/09/2019
by   Anton Mallasto, et al.

Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution. Recently, a popular choice of similarity measure has been the Wasserstein metric, which in the Kantorovich duality formulation is the supremal difference between the expected values of a potential function under the real data distribution and under the model hypothesis. In practice, the potential is approximated with a neural network called the discriminator. The duality constraints on the discriminator's function class are enforced only approximately, and the expectations are estimated from samples. This gives at least three sources of error: the approximated discriminator and constraints, the estimation of the expectations, and the optimization required to find the optimal potential. In this work, we study how well the methods used in generative adversarial networks to approximate the Wasserstein metric actually perform. We consider, in particular, the c-transform formulation, which eliminates the need to enforce the constraints explicitly. We demonstrate that the c-transform allows a more accurate estimation of the true Wasserstein metric from samples but, surprisingly, does not perform best in the generative setting.
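As a toy illustration of the c-transform idea in the abstract, the sketch below (hypothetical helper names; cost c(x, y) = |x − y|; potentials evaluated only on the sample support, so this is a finite-sample approximation, not the paper's method) computes the Kantorovich dual value E_mu[f] + E_nu[f^c] from two small 1D samples. Any potential f yields a lower bound on the Wasserstein-1 distance, and the bound is tight when f is optimal.

```python
def c_transform(f_vals, xs, y):
    """f^c(y) = min over sample support of [ c(x, y) - f(x) ], with c(x, y) = |x - y|."""
    return min(abs(x - y) - fx for x, fx in zip(xs, f_vals))

def dual_estimate(f, xs, ys):
    """Kantorovich dual value E_mu[f] + E_nu[f^c], estimated on samples.

    For any potential f this lower-bounds W1(mu, nu); enforcing the
    duality constraint via the c-transform makes the bound valid
    without an explicit Lipschitz penalty on f.
    """
    f_vals = [f(x) for x in xs]
    fc_vals = [c_transform(f_vals, xs, y) for y in ys]
    return sum(f_vals) / len(f_vals) + sum(fc_vals) / len(fc_vals)

xs = [0.0, 1.0]   # samples from the data distribution mu
ys = [2.0, 3.0]   # samples from the model distribution nu

# Here every point moves right by 2, so W1 = 2; the potential f(x) = -x
# is optimal and the dual estimate attains it.
print(dual_estimate(lambda x: -x, xs, ys))   # -> 2.0
print(dual_estimate(lambda x: 0.0, xs, ys))  # suboptimal f: a looser lower bound
```

Note how the c-transform builds the constrained dual partner f^c directly from f, which is exactly what removes the need for the approximate constraint enforcement (e.g. gradient penalties) discussed in the abstract.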


