A Distributed Training Algorithm of Generative Adversarial Networks with Quantized Gradients

10/26/2020
by Xiaojun Chen, et al.

Training generative adversarial networks (GANs) in a distributed fashion is promising because it enables efficient training on the massive datasets encountered in real-world applications. However, GANs are known to be difficult to train with SGD-type methods (which may fail to converge), and distributed SGD-type methods can also incur a massive communication cost. In this paper, we propose a distributed GAN training algorithm with quantized gradients, dubbed DQGAN, which is the first distributed training method for GANs that uses quantized gradients. The new method trains GANs based on a single-machine algorithm called Optimistic Mirror Descent (OMD) and is applicable to any gradient compression method that satisfies a general δ-approximate compressor property. The error-feedback operation we design compensates for the bias introduced by compression and, moreover, ensures the convergence of the new method. Theoretically, we establish the non-asymptotic convergence of the DQGAN algorithm to a first-order stationary point, which shows that the proposed algorithm can achieve a linear speedup in the parameter-server model. Empirically, our experiments show that DQGAN reduces communication cost and saves training time with only slight performance degradation on both synthetic and real datasets.
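The abstract only names the building blocks, so the following is a minimal sketch, not the paper's actual algorithm: it combines a top-k compressor (one common choice of δ-approximate compressor), per-worker error feedback, and an optimistic (Euclidean) mirror-descent update, demonstrated on a toy bilinear saddle-point problem. All function names (`topk_compress`, `worker_step`, `omd_update`), the choice of compressor, the toy game, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def topk_compress(x, k):
    """Keep the k largest-magnitude entries (a standard delta-approximate compressor)."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def worker_step(local_grad, error, k):
    """Error feedback: add the residual left over from the previous round before
    compressing, then carry the new residual forward to the next round."""
    corrected = local_grad + error
    compressed = topk_compress(corrected, k)
    return compressed, corrected - compressed

def omd_update(z, grad_curr, grad_prev, lr):
    """Optimistic mirror descent (Euclidean case): extrapolate with the previous
    aggregated gradient to stabilize min-max training."""
    return z - 2.0 * lr * grad_curr + lr * grad_prev

# Toy stand-in for a GAN: the bilinear saddle problem min_x max_y x^T A y,
# whose unique equilibrium is x = y = 0 for a nonsingular A.
rng = np.random.default_rng(0)
d, workers, k, lr = 10, 4, 3, 0.05
A = rng.standard_normal((d, d)) / np.sqrt(d)
x, y = rng.standard_normal(d), rng.standard_normal(d)
errors = [np.zeros(2 * d) for _ in range(workers)]
prev_grad = np.zeros(2 * d)

def game_operator(x, y, noise):
    """Stochastic gradient field of the saddle problem (descent on x, ascent on y)."""
    return np.concatenate([A @ y, -A.T @ x]) + noise

for t in range(500):
    # Each worker compresses its (error-corrected) stochastic gradient;
    # the parameter server averages the compressed messages.
    agg = np.zeros(2 * d)
    for w in range(workers):
        g = game_operator(x, y, 0.1 * rng.standard_normal(2 * d))
        msg, errors[w] = worker_step(g, errors[w], k)
        agg += msg
    agg /= workers

    # Server applies the optimistic update using the current and previous aggregates.
    z = omd_update(np.concatenate([x, y]), agg, prev_grad, lr)
    prev_grad = agg
    x, y = z[:d], z[d:]

print("||x||, ||y|| after training:", np.linalg.norm(x), np.linalg.norm(y))
```

Under these assumptions the iterates drift toward the equilibrium even though each worker transmits only k of the 2d gradient coordinates per round; the error-feedback buffers keep the dropped coordinates from being lost, which is the role the abstract attributes to the designed error-feedback operation.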

