
Banach Wasserstein GAN

by Jonas Adler, et al.
University of Cambridge
KTH Royal Institute of Technology

Wasserstein Generative Adversarial Networks (WGANs) can be used to generate realistic samples from complicated image distributions. The Wasserstein metric used in WGANs is based on a notion of distance between individual images, which induces a notion of distance between probability distributions of images. So far the community has considered ℓ^2 as the underlying distance. We generalize the theory of WGAN with gradient penalty to Banach spaces, allowing practitioners to select the features to emphasize in the generator. We further discuss the effect of some particular choices of underlying norms, focusing on Sobolev norms. Finally, we demonstrate the impact of the choice of norm on model performance and show state-of-the-art inception scores for non-progressive growing GANs on CIFAR-10.
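The key idea of the abstract, replacing the ℓ² norm in the WGAN gradient penalty with a Banach-space norm such as a Sobolev norm, can be illustrated with a minimal NumPy sketch. This is an illustrative toy, not the paper's implementation: the function names are hypothetical, and the Sobolev H^s norm is computed via its standard Fourier-multiplier definition. The penalty uses the dual norm (H^{-s} for H^s), as the theory requires; with s = 0 it reduces to the familiar ℓ² WGAN-GP term.

```python
import numpy as np

def sobolev_norm(img, s=1.0):
    """Sobolev H^s norm of a 2-D array via the FFT:
    ||f||_{H^s} = || (1 + |xi|^2)^{s/2} f_hat ||_2 (Parseval-normalized).
    With s = 0 this is just the l2 norm of img."""
    f_hat = np.fft.fft2(img)
    ky = np.fft.fftfreq(img.shape[0])
    kx = np.fft.fftfreq(img.shape[1])
    xi2 = ky[:, None] ** 2 + kx[None, :] ** 2          # |xi|^2 on the grid
    multiplier = (1.0 + xi2) ** (s / 2.0)              # Fourier multiplier
    # Divide by sqrt(N) so that s = 0 recovers the plain l2 norm of img.
    return np.linalg.norm(multiplier * f_hat) / np.sqrt(img.size)

def gradient_penalty(grad, s=1.0, target=1.0):
    """WGAN-GP style penalty with the dual Sobolev norm replacing l2.
    `grad` stands for the discriminator gradient at an interpolated sample;
    the dual of H^s is H^{-s}, hence the sign flip on s."""
    dual = sobolev_norm(grad, s=-s)
    return (dual - target) ** 2
```

Choosing s > 0 makes the norm weight high-frequency content (edges, texture) more heavily, which is the mechanism by which the choice of norm lets a practitioner emphasize particular image features in the generator.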



