Complexity Controlled Generative Adversarial Networks

11/20/2020
by   Himanshu Pant, et al.

One of the issues faced in training Generative Adversarial Nets (GANs) and their variants is the problem of mode collapse, wherein the trained model captures only a limited number of modes of the data distribution, so generated samples lack diversity. In this paper, we propose an alternative architecture via the Low-Complexity Neural Network (LCNN), which attempts to learn models with low complexity. The motivation is that controlling model complexity leads to models that do not overfit the training data. We incorporate the LCNN loss function into GANs, Deep Convolutional GANs (DCGANs) and Spectral Normalized GANs (SNGANs), to develop hybrid architectures called the LCNN-GAN, LCNN-DCGAN and LCNN-SNGAN respectively. On various large benchmark image datasets, we show that our proposed models train stably while avoiding mode collapse. We also show how the learning behavior can be controlled by a hyperparameter in the LCNN functional, which also yields an improved inception score.
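The abstract does not give the exact form of the LCNN functional. As a rough, hypothetical sketch of the general idea, a complexity-controlled discriminator loss can be written as the standard GAN loss plus a complexity penalty weighted by a hyperparameter (here a simple sum of squared weight norms stands in for the paper's functional):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def complexity_penalty(weights):
    # Stand-in for the LCNN complexity functional: a plain sum of
    # squared weight norms (the paper's exact functional differs).
    return sum(float(np.sum(w ** 2)) for w in weights)

def discriminator_loss(d_real, d_fake, weights, lam):
    # Standard (non-saturating) GAN discriminator loss plus a
    # complexity term weighted by the hyperparameter `lam`, a
    # hypothetical analogue of the LCNN hyperparameter that
    # controls model complexity.
    bce = (-np.mean(np.log(d_real + 1e-8))
           - np.mean(np.log(1.0 - d_fake + 1e-8)))
    return bce + lam * complexity_penalty(weights)

# Toy example: discriminator outputs on a real and a generated batch.
d_real = sigmoid(rng.normal(1.0, 0.5, size=64))
d_fake = sigmoid(rng.normal(-1.0, 0.5, size=64))
weights = [rng.normal(size=(8, 8))]

loss_unreg = discriminator_loss(d_real, d_fake, weights, lam=0.0)
loss_reg = discriminator_loss(d_real, d_fake, weights, lam=1e-3)
# With lam > 0 the penalty strictly increases the loss, discouraging
# high-complexity discriminators.
assert loss_reg > loss_unreg
```

Increasing `lam` trades goodness of fit for lower model complexity, which is the intuition behind avoiding overfitting (and, per the paper, stabilizing training).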


Related research:

- Rényi Generative Adversarial Networks (06/03/2020): We propose a loss function for generative adversarial networks (GANs) us...
- MGGAN: Solving Mode Collapse using Manifold Guided Training (04/12/2018): Mode collapse is a critical problem in training generative adversarial n...
- Stochastic Deconvolutional Neural Network Ensemble Training on Generative Pseudo-Adversarial Networks (02/07/2018): The training of Generative Adversarial Networks is a difficult task main...
- Stable Parallel Training of Wasserstein Conditional Generative Adversarial Neural Networks (07/25/2022): We propose a stable, parallel approach to train Wasserstein Conditional ...
- Kernel-Guided Training of Implicit Generative Models with Stability Guarantees (10/29/2019): Modern implicit generative models such as generative adversarial network...
- Effect of Input Noise Dimension in GANs (04/15/2020): Generative Adversarial Networks (GANs) are by far the most successful ge...
- Reciprocal Adversarial Learning via Characteristic Functions (06/15/2020): Generative adversarial nets (GANs) have become a preferred tool for acco...
