A Wasserstein GAN model with the total variational regularization

12/03/2018
by   Moshe Y. Vardi, et al.

It is well known that generative adversarial nets (GANs) are remarkably difficult to train. The recently proposed Wasserstein GAN (WGAN) opened principled research directions towards addressing these issues, but we find in practice that gradient penalty WGANs (GP-WGANs) still suffer from training instability. In this paper, we incorporate a Total Variational (TV) regularization term into the WGAN formulation instead of weight clipping or a gradient penalty, so that the Lipschitz constraint is enforced on the critic network. Our proposed method is more stable in training than GP-WGANs and works well across varied GAN architectures. We also present a method to control the trade-off between image diversity and visual quality, which introduces no additional computational burden.
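The abstract does not give the exact form of the TV regularizer, so the following is only a minimal sketch of how a TV-style term could replace weight clipping or the gradient penalty in a WGAN critic loss. The penalty here (the expected norm of the critic's input gradient on points between real and fake batches) is one plausible reading of "total variational regularization"; the function name `critic_loss`, the hyperparameter `lambda_tv`, and the interpolation step are illustrative assumptions, not the paper's stated method.

```python
import torch

def critic_loss(critic, real, fake, lambda_tv=10.0):
    # Standard WGAN critic objective: minimize E[D(fake)] - E[D(real)].
    wasserstein_term = critic(fake).mean() - critic(real).mean()

    # TV-style penalty (assumed form): expected norm of the critic's gradient
    # with respect to its input, evaluated on points interpolated between the
    # real and fake batches. Unlike the WGAN-GP penalty, it penalizes the
    # gradient norm itself rather than its deviation from 1.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(x).sum(), x, create_graph=True)[0]
    tv_penalty = grad.flatten(1).norm(dim=1).mean()

    return wasserstein_term + lambda_tv * tv_penalty
```

In this reading, the TV term plays the same role as the gradient penalty in GP-WGAN (softly enforcing the Lipschitz constraint on the critic) while requiring no weight clipping.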
