First Order Generative Adversarial Networks

02/13/2018
by   Calvin Seward, et al.

GANs excel at learning high-dimensional distributions, but they can update generator parameters in directions that do not correspond to the steepest descent direction of the objective. Prominent examples of problematic update directions include those used in both Goodfellow's original GAN and WGAN-GP. To formally describe an optimal update direction, we introduce a theoretical framework which allows the derivation of requirements on both the divergence and the corresponding method for determining an update direction. These requirements guarantee unbiased mini-batch updates in the direction of steepest descent. We propose a novel divergence which approximates the Wasserstein distance while regularizing the critic's first-order information. Together with an accompanying update direction, this divergence fulfills the requirements for unbiased steepest descent updates. We verify our method, the First Order GAN, with CelebA image generation and set a new state of the art on the One Billion Word language generation task.
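For context on the "first-order information" the proposed divergence regularizes, below is a minimal PyTorch sketch of the standard WGAN-GP gradient penalty, the baseline the abstract contrasts against. It computes the critic's gradient with respect to its inputs on points interpolated between real and generated samples. The `critic` module, 4-D image tensor shapes, and sampling of interpolation weights are assumptions for illustration; this is the well-known baseline penalty, not the paper's proposed divergence.

```python
import torch

def gradient_penalty(critic, real, fake):
    # Sample random interpolation weights, one per example
    # (assumes 4-D image batches of shape [N, C, H, W]).
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    # First-order information of the critic: gradient of its
    # output with respect to the interpolated inputs.
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    # WGAN-GP penalizes deviation of the gradient norm from 1.
    return ((grad_norm - 1) ** 2).mean()
```

In WGAN-GP this penalty is added to the critic loss with a fixed weight (commonly 10). The paper's contribution is a different divergence and accompanying update direction for which mini-batch gradient steps are provably unbiased estimates of the steepest descent direction.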
