Effect of Input Noise Dimension in GANs

04/15/2020
by Manisha Padala, et al.

Generative Adversarial Networks (GANs) are by far the most successful generative models. Learning the transformation that maps low-dimensional input noise to the data distribution forms the foundation of GANs. Although they have been applied in various domains, GANs are prone to challenges such as mode collapse and unstable training. To overcome these challenges, researchers have proposed novel loss functions, architectures, and optimization methods. In our work, unlike previous approaches, we focus on the input noise and its role in the generation. We aim to study, quantitatively and qualitatively, the effect of the dimension of the input noise on the performance of GANs. As quantitative measures we use the Fréchet Inception Distance (FID) and the Inception Score (IS), the standard performance measures on image datasets. We compare FID and IS values for DCGAN and WGAN-GP on three image datasets of differing complexity. Our experiments show that the input noise dimension yielding optimal results depends on both the dataset and the architecture used. We also observe that the state-of-the-art performance measures do not provide enough useful insight. We therefore conclude that further theoretical analysis is needed to understand the relationship between the low-dimensional noise distribution and the generated images, and that better performance measures are required.
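The abstract describes sweeping the input noise (latent) dimension of DCGAN and WGAN-GP generators and scoring the outputs with FID and IS. The page carries no code, but a minimal PyTorch sketch of such a sweep could look like the following; the `Generator` architecture, the 32x32 output resolution, and the candidate `z_dim` values are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Minimal DCGAN-style generator; the latent (input noise)
    dimension `z_dim` is the quantity varied in the paper's study."""
    def __init__(self, z_dim: int = 100, channels: int = 3, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # project the z vector (z_dim x 1 x 1) up to a 4x4 feature map
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8),
            nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(True),
            # 16x16 -> 32x32, outputs in [-1, 1]
            nn.ConvTranspose2d(feat * 2, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))

# Sweep over candidate noise dimensions (values here are illustrative)
for z_dim in (2, 10, 100):
    g = Generator(z_dim=z_dim)
    z = torch.randn(8, z_dim)   # batch of 8 noise vectors
    fake = g(z)                 # -> (8, 3, 32, 32)
    print(z_dim, tuple(fake.shape))
```

In the full experiment each generator would be trained and its samples scored, e.g. with an off-the-shelf FID implementation such as `torchmetrics.image.fid.FrechetInceptionDistance`, rather than merely checking output shapes as above.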


Related research

02/27/2020 – Topology Distance: A Topology-Based Approach For Evaluating Generative Adversarial Networks
02/07/2018 – Geometry Score: A Method For Comparing Generative Adversarial Networks
02/17/2021 – Evolving GAN Formulations for Higher Quality Image Synthesis
07/12/2019 – Generative Modeling by Estimating Gradients of the Data Distribution
11/20/2020 – Complexity Controlled Generative Adversarial Networks
06/17/2020 – Flows Succeed Where GANs Fail: Lessons from Low-Dimensional Data
07/03/2017 – Learning to Avoid Errors in GANs by Manipulating Input Spaces
