Improved Training of Wasserstein GANs

03/31/2017 ∙ by Ishaan Gulrajani, et al.

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but can still generate low-quality samples or fail to converge in some settings. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to pathological behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high-quality generations on CIFAR-10 and LSUN bedrooms.
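As a concrete illustration of the penalty described in the abstract, the sketch below (PyTorch, not the authors' reference code) evaluates the critic at points interpolated between real and generated samples, takes the gradient of its output with respect to those points, and penalizes deviations of the gradient norm from 1. The function name gradient_penalty, the image-shaped tensors, and the default coefficient are assumptions made for this example.

    import torch

    def gradient_penalty(critic, real, fake, lambda_gp=10.0):
        # Sample points uniformly along straight lines between pairs of
        # real and generated examples (assumes N x C x H x W tensors).
        batch_size = real.size(0)
        eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
        interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

        # Gradient of the critic's scores with respect to its input;
        # summing the scores yields per-sample gradients for a feedforward critic.
        scores = critic(interpolated)
        grads, = torch.autograd.grad(
            outputs=scores.sum(),
            inputs=interpolated,
            create_graph=True,
        )

        # Penalize deviation of each sample's gradient norm from 1.
        grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
        return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

In such a setup, the term would be added to the usual critic loss, e.g. critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake).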



Code Repositories

cycle-gan-tf

Reimplementation of CycleGAN (https://arxiv.org/pdf/1703.10593.pdf) with the improved WGAN (https://arxiv.org/abs/1704.00028) loss in TensorFlow.



Text-to-Image-Synthesis

PyTorch implementation of the Generative Adversarial Text-to-Image Synthesis paper.



pytorch-wgan-gp

PyTorch implementation of "Improved Training of Wasserstein GANs" (arXiv:1704.00028).



unified-gan-tensorflow

A TensorFlow implementation of GAN, WGAN, and WGAN with gradient penalty.



CERN_project

GANs for simulating electromagnetic showers in the ATLAS calorimeter.

