A Distributed Training Algorithm of Generative Adversarial Networks with Quantized Gradients

10/26/2020
by Xiaojun Chen, et al.

Training generative adversarial networks (GANs) in a distributed fashion is a promising direction, since it enables efficient training on massive amounts of data in real-world applications. However, GANs are known to be difficult to train with SGD-type methods (which may fail to converge), and distributed SGD-type methods can also incur substantial communication costs. In this paper, we propose a distributed GAN training algorithm with quantized gradients, dubbed DQGAN, which is the first distributed training method for GANs that uses quantized gradients. The new method trains GANs based on a single-machine algorithm called Optimistic Mirror Descent (OMD) and is applicable to any gradient compression method that satisfies a general δ-approximate compressor property. The error-feedback operation we design compensates for the bias introduced by compression and, moreover, ensures the convergence of the new method. Theoretically, we establish the non-asymptotic convergence of the DQGAN algorithm to a first-order stationary point, which shows that the proposed algorithm achieves a linear speedup in the parameter-server model. Empirically, our experiments show that DQGAN reduces communication cost and training time with only slight performance degradation on both synthetic and real datasets.
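To make the ingredients of the abstract concrete, below is a minimal NumPy sketch of the three pieces DQGAN combines: a δ-approximate compressor (top-k sparsification is one standard example), an error-feedback buffer that carries the compression residual into the next round, and the optimistic mirror descent update in its Euclidean form. This is an illustrative sketch under assumptions, not the paper's implementation: the names topk_compress, ErrorFeedbackWorker, and omd_step are invented here, and a toy quadratic objective stands in for the per-worker stochastic GAN gradients.

import numpy as np

def topk_compress(v, k):
    # Top-k sparsification: keeps the k largest-magnitude entries.
    # One standard example of a delta-approximate compressor,
    # with delta = k / len(v).
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class ErrorFeedbackWorker:
    # One worker in the parameter-server model: it compresses its
    # local gradient and feeds the compression error back into the
    # next round, compensating for the bias of the compressor.
    def __init__(self, dim):
        self.residual = np.zeros(dim)

    def compress_gradient(self, grad, k):
        corrected = grad + self.residual   # add accumulated error
        msg = topk_compress(corrected, k)  # the message actually sent
        self.residual = corrected - msg    # remember what was lost
        return msg

def omd_step(theta, grad, prev_grad, lr):
    # Optimistic mirror descent, Euclidean case:
    #   theta_{t+1} = theta_t - lr * (2 * g_t - g_{t-1}).
    # The extrapolation term damps the cycling that plain SGD
    # exhibits on min-max (GAN) objectives.
    return theta - lr * (2.0 * grad - prev_grad)

if __name__ == "__main__":
    # Toy demonstration (assumed setup): 4 workers, gradients of
    # 0.5 * ||theta||^2 plus noise, standing in for GAN gradients.
    rng = np.random.default_rng(0)
    dim, k, lr = 10, 3, 0.05
    workers = [ErrorFeedbackWorker(dim) for _ in range(4)]
    theta = rng.standard_normal(dim)
    prev_grad = np.zeros(dim)
    for t in range(200):
        local = [theta + 0.1 * rng.standard_normal(dim) for _ in workers]
        msgs = [w.compress_gradient(g, k) for w, g in zip(workers, local)]
        grad = np.mean(msgs, axis=0)   # server averages compressed messages
        theta = omd_step(theta, grad, prev_grad, lr)
        prev_grad = grad
    print("final ||theta||:", float(np.linalg.norm(theta)))

The design point the sketch illustrates is that top-k compression alone is biased and can stall convergence; the residual carried by ErrorFeedbackWorker is what compensates for that bias, matching the role the abstract assigns to the error-feedback operation.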


