Wasserstein GAN

01/26/2017
by Martin Arjovsky, et al.

We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to other distances between distributions.
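The core of the WGAN training loop is: a critic f is trained for several steps to maximize E[f(real)] - E[f(fake)] (an estimate of the Wasserstein-1 distance) while its weights are clipped to a box [-c, c] to enforce a Lipschitz constraint, and the generator is then updated to minimize -E[f(fake)]. A minimal pure-Python sketch on a 1-D Gaussian toy problem (the linear critic, constants, and all names below are illustrative choices for this sketch, not taken from the paper):

```python
import random

random.seed(0)

MU = 3.0       # mean of the "real" data distribution N(MU, 1)
CLIP = 0.5     # weight-clipping bound c (Lipschitz constraint on the critic)
N_CRITIC = 5   # critic updates per generator update, as in the WGAN algorithm
LR = 0.1       # step size for both players
BATCH = 64

theta = 0.0    # generator parameter: g(z) = theta + z, z ~ N(0, 1)
w = 0.0        # critic parameter:    f(x) = w * x

def sample_real():
    return [random.gauss(MU, 1.0) for _ in range(BATCH)]

def sample_fake():
    return [theta + random.gauss(0.0, 1.0) for _ in range(BATCH)]

for step in range(300):
    # Critic: ascend E[f(real)] - E[f(fake)], then clip weights to [-c, c].
    # For f(x) = w*x the gradient w.r.t. w is mean(real) - mean(fake).
    for _ in range(N_CRITIC):
        grad_w = (sum(sample_real()) - sum(sample_fake())) / BATCH
        w += LR * grad_w
        w = max(-CLIP, min(CLIP, w))
    # Generator: descend -E[f(fake)]; gradient w.r.t. theta is -w.
    theta += LR * w

print(round(theta, 2))  # theta drifts toward MU
```

Because the critic is linear here, the Wasserstein estimate reduces to w times the difference of means; the clipped critic saturates at sign(MU - theta) * c, so the generator receives a useful gradient even when the two distributions are far apart, which is the property the abstract credits for stable learning curves.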


Related research

- 07/11/2018: On catastrophic forgetting and mode collapse in Generative Adversarial Networks
  Generative Adversarial Networks (GAN) are one of the most prominent tool...
- 01/13/2018: Which Training Methods for GANs do actually Converge?
  Recent work has shown local convergence of GAN training for absolutely c...
- 04/12/2018: MGGAN: Solving Mode Collapse using Manifold Guided Training
  Mode collapse is a critical problem in training generative adversarial n...
- 06/11/2020: Cumulant GAN
  Despite the continuous improvements of Generative Adversarial Networks (...
- 02/27/2017: McGan: Mean and Covariance Feature Matching GAN
  We introduce new families of Integral Probability Metrics (IPM) for trai...
- 09/02/2020: Properties of f-divergences and f-GAN training
  In this technical report we describe some properties of f-divergences an...
- 04/05/2020: Game of Learning Bloch Equation Simulations for MR Fingerprinting
  Purpose: This work proposes a novel approach to efficiently generate MR ...
