Towards Efficient and Unbiased Implementation of Lipschitz Continuity in GANs

04/02/2019
by Zhiming Zhou, et al.

Lipschitz continuity has recently become popular in generative adversarial networks (GANs). It has been observed that a Lipschitz-regularized discriminator leads to improved training stability and sample quality. The mainstream implementations of Lipschitz continuity are gradient penalty and spectral normalization. In this paper, we demonstrate that gradient penalty introduces an undesired bias, while spectral normalization might be overly restrictive. We accordingly propose a new method that is efficient and unbiased. Our experiments verify our analysis and show that the proposed method achieves successful training in various situations where gradient penalty and spectral normalization fail.
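For context, below is a minimal sketch, assuming PyTorch, of the two mainstream implementations the abstract contrasts: the WGAN-GP style gradient penalty and spectral normalization. This is an illustrative sketch of those prior techniques, not the paper's proposed method.

    import torch
    import torch.nn as nn

    def gradient_penalty(discriminator, real, fake):
        """WGAN-GP style penalty: push the discriminator's gradient norm
        towards 1 at points interpolated between real and fake samples."""
        batch = real.size(0)
        # One interpolation coefficient per sample, broadcast over features.
        alpha = torch.rand(batch, *([1] * (real.dim() - 1)), device=real.device)
        interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
        scores = discriminator(interp)
        # create_graph=True so the penalty itself can be backpropagated.
        grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
        grad_norm = grads.view(batch, -1).norm(2, dim=1)
        return ((grad_norm - 1) ** 2).mean()

    # Spectral normalization instead constrains each layer directly: every
    # weight matrix is divided by its largest singular value, so each layer
    # (and hence the network, with 1-Lipschitz activations) is 1-Lipschitz.
    critic = nn.Sequential(
        nn.utils.spectral_norm(nn.Linear(2, 128)),
        nn.LeakyReLU(0.2),
        nn.utils.spectral_norm(nn.Linear(128, 1)),
    )

Note the difference the paper exploits: the gradient penalty only regularizes at sampled interpolation points (which is where its bias can arise), while spectral normalization constrains every layer everywhere, which can be more restrictive than a network-level Lipschitz bound requires.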

Related research

Gradient Normalization for Generative Adversarial Networks (09/06/2021)
In this paper, we propose a novel normalization method called gradient n...

Generalization of GANs under Lipschitz continuity and data augmentation (04/06/2021)
Generative adversarial networks (GANs) have been widely used in va...

Why Spectral Normalization Stabilizes GANs: Analysis and Improvements (09/06/2020)
Spectral normalization (SN) is a widely-used technique for improving the...

Lipschitz regularized gradient flows and latent generative particles (10/31/2022)
Lipschitz regularized f-divergences are constructed by imposing a bound ...

Controllable Orthogonalization in Training DNNs (04/02/2020)
Orthogonality is widely used for training deep neural networks (DNNs) du...

About the regularity of the discriminator in conditional WGANs (03/25/2021)
Training of conditional WGANs is usually done by averaging the underlyin...

Manifold-preserved GANs (09/18/2021)
Generative Adversarial Networks (GANs) have been widely adopted in vario...
