Improved Training of Wasserstein GANs

03/31/2017 ∙ by Ishaan Gulrajani, et al.

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but can still generate low-quality samples or fail to converge in some settings. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to pathological behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high-quality generations on CIFAR-10 and LSUN bedrooms.
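The proposed penalty can be sketched in a few lines: sample points uniformly along straight lines between real and generated examples, run them through the critic, and penalize the deviation of the critic's input gradient norm from 1. Below is a minimal PyTorch sketch (not the authors' released code); the function name `gradient_penalty` and the penalty coefficient argument are illustrative, with the default of 10 taken from the paper.

```python
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP term: lambda * (||grad_x D(x_hat)||_2 - 1)^2,
    averaged over random interpolates x_hat between real and fake."""
    batch_size = real.size(0)
    # One uniform mixing coefficient per sample, broadcast over all
    # remaining dimensions (works for images or flat vectors).
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    # Gradient of the critic's output with respect to its input;
    # create_graph=True so the penalty itself can be backpropagated.
    grads = torch.autograd.grad(
        outputs=scores, inputs=x_hat,
        grad_outputs=torch.ones_like(scores),
        create_graph=True)[0]
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```

In training, this term is added to the critic's loss (critic score on fakes minus score on reals), replacing the weight-clipping step of the original WGAN.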



Code Repositories

- Reimplementation of CycleGAN with the improved WGAN loss in TensorFlow.
- PyTorch implementation of the "Generative Adversarial Text-to-Image Synthesis" paper.
- PyTorch implementation of "Improved Training of Wasserstein GANs" (arXiv:1704.00028).
- A TensorFlow implementation of GAN, WGAN, and WGAN with gradient penalty.
- GANs for simulation of electromagnetic showers in the ATLAS calorimeter.