
Decentralized Parallel Algorithm for Training Generative Adversarial Nets

by Mingrui Liu et al.

Generative Adversarial Networks (GANs) are a powerful class of generative models in the deep learning community. Current practice in large-scale GAN training <cit.> uses large models and distributed large-batch training strategies, implemented on deep learning frameworks (e.g., TensorFlow, PyTorch) designed in a centralized manner. In a centralized network topology, every worker must communicate with the central node, so when network bandwidth is low or network latency is high, performance degrades significantly. Despite recent progress on decentralized algorithms for training deep neural networks, it remains unclear whether GANs can be trained in a decentralized manner. In this paper, we design a decentralized algorithm for solving a class of non-convex non-concave min-max problems with a provable guarantee. Experimental results on GANs demonstrate the effectiveness of the proposed algorithm.
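To make the centralized-vs-decentralized contrast concrete, here is a toy sketch of the communication pattern the abstract describes. This is my illustration, not the paper's algorithm: each worker runs a local gradient descent-ascent step on a simple strongly-convex-strongly-concave min-max surrogate (with deterministic gradients for simplicity), then averages its parameters only with its two ring neighbors (gossip averaging), so no worker ever contacts a central server. The objective, learning rate, and ring size are all assumptions chosen for the demo.

```python
# Toy decentralized min-max training sketch (illustrative only, not the
# paper's algorithm). Workers on a ring solve
#     min_x max_y f(x, y) = 0.25*x^2 + x*y - 0.25*y^2
# via local gradient descent-ascent plus gossip averaging with neighbors.
import random

def decentralized_minmax(n_workers=4, steps=2000, lr=0.02, seed=0):
    rng = random.Random(seed)
    # Each worker holds its own copy of the min variable x and max variable y.
    xs = [rng.uniform(-1, 1) for _ in range(n_workers)]
    ys = [rng.uniform(-1, 1) for _ in range(n_workers)]
    for _ in range(steps):
        # Local simultaneous step: descend in x, ascend in y.
        gx = [0.5 * xs[i] + ys[i] for i in range(n_workers)]  # df/dx
        gy = [xs[i] - 0.5 * ys[i] for i in range(n_workers)]  # df/dy
        xs = [xs[i] - lr * gx[i] for i in range(n_workers)]
        ys = [ys[i] + lr * gy[i] for i in range(n_workers)]
        # Gossip: average with the two ring neighbors only -- this replaces
        # the central parameter server of a centralized topology.
        xs = [(xs[i - 1] + xs[i] + xs[(i + 1) % n_workers]) / 3
              for i in range(n_workers)]
        ys = [(ys[i - 1] + ys[i] + ys[(i + 1) % n_workers]) / 3
              for i in range(n_workers)]
    return xs, ys
```

Despite starting from different random points and never communicating globally, the workers reach consensus and approach the saddle point (0, 0); the mixing step contracts disagreement across the ring at every iteration. A real GAN would replace the scalar surrogate with generator/discriminator parameters and stochastic gradients.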




A Decentralized Adaptive Momentum Method for Solving a Class of Min-Max Optimization Problems

Min-max saddle point games have recently been intensely studied, due to ...

Decentralized Local Stochastic Extra-Gradient for Variational Inequalities

We consider decentralized stochastic variational inequalities where the ...

Collaborative Deep Learning Across Multiple Data Centers

Valuable training data is often owned by independent organizations and l...

Parallel/distributed implementation of cellular training for generative adversarial neural networks

Generative adversarial networks (GANs) are widely used to learn generati...

Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions

Generative adversarial networks (GANs) are among the most successful mod...

Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets

Adaptive gradient algorithms perform gradient-based updates using the hi...

Decentralization Meets Quantization

Optimizing distributed learning systems is an art of balancing between c...