
Parallel/distributed implementation of cellular training for generative adversarial neural networks

04/07/2020
by   Emiliano Perez, et al.

Generative adversarial networks (GANs) are widely used to learn generative models. A GAN consists of two networks, a generator and a discriminator, that optimize their parameters through adversarial learning. This article presents a parallel/distributed implementation of a cellular competitive coevolutionary method that trains two populations of GANs. A distributed-memory parallel implementation is proposed for execution on high-performance/supercomputing centers. The approach is evaluated on the generation of handwritten digits (samples from the MNIST dataset), with efficient results: the proposed implementation reduces training times and scales properly across different grid sizes.
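The cellular coevolutionary scheme described above arranges generator/discriminator pairs on a grid, where each cell trains against a local neighborhood of adversaries. The following is a minimal, self-contained sketch of that idea; the grid size, the scalar stand-ins for the networks, the toy fitness function, and all helper names are illustrative assumptions, not the paper's actual implementation (which trains real GANs and distributes cells across compute nodes).

```python
import random

GRID = 3  # assumed 3x3 toroidal grid; one (generator, discriminator) pair per cell

def neighbors(r, c):
    # Von Neumann neighborhood on a toroidal grid (edges wrap around)
    return [((r - 1) % GRID, c), ((r + 1) % GRID, c),
            (r, (c - 1) % GRID), (r, (c + 1) % GRID)]

# Stand-ins for the two networks: a single scalar parameter each.
grid = {(r, c): {"gen": random.uniform(-1, 1), "disc": random.uniform(-1, 1)}
        for r in range(GRID) for c in range(GRID)}

def fitness(gen, disc):
    # Toy adversarial objective standing in for the real GAN loss
    return -abs(gen - disc)

def train_step():
    new_grid = {}
    for (r, c), cell in grid.items():
        # Local subpopulation: this cell plus its neighbors
        pool = [cell] + [grid[n] for n in neighbors(r, c)]
        # Select the generator that performs best against this cell's
        # discriminator, then perturb it (mutation)
        best_gen = max(pool, key=lambda p: fitness(p["gen"], cell["disc"]))["gen"]
        new_grid[(r, c)] = {"gen": best_gen + random.gauss(0, 0.05),
                            "disc": cell["disc"]}
    grid.update(new_grid)  # synchronous update of the whole grid

for _ in range(20):
    train_step()
```

In a distributed-memory setting, each cell (or block of cells) would live on a separate process, and the neighborhood lookup would become message passing between ranks; the synchronous grid update here mirrors that communication step.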


03/30/2020

A game-theoretic approach for Generative Adversarial Networks

Generative adversarial networks (GANs) are a class of generative models,...
10/17/2020

Training Generative Adversarial Networks via stochastic Nash games

Generative adversarial networks (GANs) are a class of generative models ...
11/30/2018

Lipizzaner: A System That Scales Robust Generative Adversarial Network Training

GANs are difficult to train due to convergence pathologies such as mode ...
11/22/2020

Generative Adversarial Stacked Autoencoders

Generative Adversarial Networks (GANs) have become predominant in image ...
10/28/2019

Decentralized Parallel Algorithm for Training Generative Adversarial Nets

Generative Adversarial Networks (GANs) are a powerful class of generative ...
09/05/2017

Linking Generative Adversarial Learning and Binary Classification

In this note, we point out a basic link between generative adversarial (...
11/27/2016

Handwriting Profiling using Generative Adversarial Networks

Handwriting is a skill learned by humans from a very early age. The abil...