
Banach Wasserstein GAN

06/18/2018
by Jonas Adler, et al.
University of Cambridge
KTH Royal Institute of Technology

Wasserstein Generative Adversarial Networks (WGANs) can be used to generate realistic samples from complicated image distributions. The Wasserstein metric used in WGANs is based on a notion of distance between individual images, which induces a notion of distance between probability distributions of images. So far the community has considered ℓ^2 as the underlying distance. We generalize the theory of WGAN with gradient penalty to Banach spaces, allowing practitioners to select the features to emphasize in the generator. We further discuss the effect of some particular choices of underlying norms, focusing on Sobolev norms. Finally, we demonstrate the impact of the choice of norm on model performance and show state-of-the-art inception scores for non-progressive growing GANs on CIFAR-10.
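To make the construction concrete, the following is a minimal PyTorch sketch of a Sobolev-norm gradient penalty in the spirit of the abstract, not the authors' reference implementation. It assumes a scalar-valued `critic` network and computes the W^{s,2} norm in the Fourier domain via the multiplier (1 + |ξ|²)^{s/2}; since the dual of W^{s,2} is W^{−s,2}, the penalty constrains the dual norm of the critic's gradient. The helper names (`sobolev_norm`, `bwgan_gradient_penalty`), the default penalty weight `lam=10.0`, and the omitted normalization constants are illustrative assumptions.

```python
import torch

# Minimal sketch (assumptions, not the paper's reference code) of a
# Sobolev gradient penalty for WGAN-GP. The W^{s,2} norm is computed in
# the Fourier domain with the multiplier (1 + |xi|^2)^(s/2); the penalty
# then constrains the dual W^{-s,2} norm of the critic's gradient.

def sobolev_norm(x, s):
    """Approximate per-sample ||x||_{W^{s,2}} for images of shape (B, C, H, W)."""
    fx = torch.fft.fft2(x)                                    # FFT over H, W
    xi1 = torch.fft.fftfreq(x.shape[-2], device=x.device).view(-1, 1)
    xi2 = torch.fft.fftfreq(x.shape[-1], device=x.device).view(1, -1)
    mult = (1.0 + xi1 ** 2 + xi2 ** 2) ** (s / 2.0)           # Fourier multiplier
    return (mult * fx).abs().pow(2).flatten(1).sum(1).sqrt()  # norm per sample

def bwgan_gradient_penalty(critic, real, fake, s=1.0, lam=10.0):
    """lam * E[(||grad D(x_hat)||_{W^{-s,2}} - 1)^2] at random interpolates."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).detach().requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    dual = sobolev_norm(grad, -s)  # dual space W^{-s,2}: exponent -s
    return lam * ((dual - 1.0) ** 2).mean()
```

With s = 0 the multiplier is identically 1 and, by Parseval's identity, the penalty reduces (up to a constant) to the standard ℓ² gradient penalty of WGAN-GP; varying s shifts the emphasis between low- and high-frequency image features.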

