Towards Distributed Coevolutionary GANs

07/21/2018
by   Tom Schmiedlechner, et al.

Generative Adversarial Networks (GANs) have become one of the dominant methods for deep generative modeling. Despite their demonstrated success on multiple vision tasks, GANs are difficult to train, and much research has been dedicated to understanding and improving their gradient-based learning dynamics. Here, we investigate the use of coevolution, a class of black-box (gradient-free) co-optimization techniques and a powerful tool in evolutionary computing, as a supplement to gradient-based GAN training. Experiments on a simple model that exhibits several of the degenerate gradient-based GAN dynamics (e.g., mode collapse, oscillatory behavior, and vanishing gradients) show that coevolution is a promising framework for escaping degenerate GAN training behaviors.
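For intuition, the sketch below shows what coevolutionary GAN training can look like in its simplest black-box form: two populations (generators and discriminators) are evaluated against each other, the best individuals are selected, and the populations are refilled with mutated copies. The toy 1-D Gaussian task, the linear generator and logistic-regression discriminator, the population sizes, and the Gaussian-mutation scheme are illustrative assumptions only; they are not the paper's actual models or algorithm.

```python
# Illustrative sketch of coevolutionary (gradient-free) GAN training on a toy 1-D task.
# NOTE: the model forms, population sizes, and mutation scheme are assumptions for
# illustration; they are not the implementation described in the paper.
import numpy as np

rng = np.random.default_rng(0)
REAL_MU, REAL_SIGMA = 4.0, 0.5            # target "real data" distribution
POP, ELITE, GENS, SIGMA = 8, 2, 200, 0.1  # population size, elites, generations, mutation scale

def sample_real(n):                        # real samples ~ N(REAL_MU, REAL_SIGMA)
    return rng.normal(REAL_MU, REAL_SIGMA, n)

def generate(g, n):                        # generator: affine map of noise z ~ N(0, 1)
    z = rng.normal(size=n)
    return g[0] + g[1] * z

def d_prob(d, x):                          # discriminator: logistic regression on x
    return 1.0 / (1.0 + np.exp(-(d[0] + d[1] * x)))

def d_loss(d, g, n=128):                   # binary cross-entropy of the discriminator
    real, fake, eps = sample_real(n), generate(g, n), 1e-8
    return -(np.mean(np.log(d_prob(d, real) + eps)) +
             np.mean(np.log(1.0 - d_prob(d, fake) + eps)))

def g_loss(g, d, n=128):                   # non-saturating generator loss
    return -np.mean(np.log(d_prob(d, generate(g, n)) + 1e-8))

# Two coevolving populations of parameter vectors [bias, weight].
gens = rng.normal(size=(POP, 2))
discs = rng.normal(size=(POP, 2))

for _ in range(GENS):
    # Generator fitness: loss against a small panel of discriminators
    # (the previous generation's elites after the first iteration).
    g_fit = np.array([np.mean([g_loss(g, d) for d in discs[:ELITE]]) for g in gens])
    # Discriminator fitness: loss against the leading generators.
    d_fit = np.array([np.mean([d_loss(d, g) for g in gens[:ELITE]]) for d in discs])

    # Select elites (lower loss is better) and refill with Gaussian-mutated copies.
    gens = gens[np.argsort(g_fit)]
    discs = discs[np.argsort(d_fit)]
    gens[ELITE:] = gens[rng.integers(ELITE, size=POP - ELITE)] + \
        SIGMA * rng.normal(size=(POP - ELITE, 2))
    discs[ELITE:] = discs[rng.integers(ELITE, size=POP - ELITE)] + \
        SIGMA * rng.normal(size=(POP - ELITE, 2))

best_g = gens[0]
print("learned generator mean/std:", best_g[0], abs(best_g[1]))
```

Because selection and mutation never consult gradients, a loop like this can keep making progress where gradient-based updates stall (e.g., vanishing gradients), which is the sense in which coevolution can supplement standard GAN training.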
