How Well Do WGANs Estimate the Wasserstein Metric?

10/09/2019
by   Anton Mallasto, et al.

Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution. A popular recent choice for this similarity measure is the Wasserstein metric, which in its Kantorovich duality formulation is the maximal difference between the expected values of a potential function under the real data distribution and under the model distribution. In practice, the potential is approximated by a neural network, called the discriminator; the duality constraints on the discriminator's function class are enforced only approximately, and the expectations are estimated from samples. This introduces at least three sources of error: the approximation of the discriminator and its constraints, the estimation of the expectations, and the optimization required to find the optimal potential. In this work, we study how well the methods used in generative adversarial networks approximate the Wasserstein metric. In particular, we consider the c-transform formulation, which eliminates the need to enforce the constraints explicitly. We demonstrate that the c-transform allows a more accurate estimate of the true Wasserstein metric from samples but, surprisingly, does not perform best in the generative setting.
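The c-transform formulation mentioned above can be sketched numerically: given samples from both distributions and a candidate potential, the c-transform replaces the constrained second potential with a pointwise infimum over the sample, so the dual objective needs no explicit constraint enforcement. A minimal NumPy sketch of this estimator (the function names and the toy 1-D data are illustrative, not taken from the paper's code):

```python
import numpy as np

def c_transform(phi_x, X, Y, c):
    """Empirical c-transform: phi^c(y) = min_x [c(x, y) - phi(x)],
    with the infimum taken over the sample X."""
    # cost matrix C[i, j] = c(X[i], Y[j])
    C = np.array([[c(x, y) for y in Y] for x in X])
    return (C - phi_x[:, None]).min(axis=0)

def dual_objective(phi_x, X, Y, c):
    """Kantorovich dual value E_mu[phi] + E_nu[phi^c], estimated from samples.
    Maximizing this over phi_x approximates the Wasserstein cost W_c(mu, nu)."""
    return phi_x.mean() + c_transform(phi_x, X, Y, c).mean()

# Toy example: two 1-D point clouds with squared-Euclidean cost.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=50)   # samples from the "data" distribution
Y = rng.normal(2.0, 1.0, size=50)   # samples from the "model" distribution
cost = lambda x, y: (x - y) ** 2

phi = np.zeros(len(X))  # any potential gives a lower bound on W_c
print(dual_objective(phi, X, Y, cost))
```

In a WGAN-style setup the potential would be a neural network evaluated at the samples and `phi_x` would be optimized by gradient ascent; here a trivial zero potential suffices to show that any feasible potential yields a lower bound on the Wasserstein cost.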

