Training Generative Adversarial Networks with Adaptive Composite Gradient

11/10/2021
by   Huiqing Qi, et al.

The wide application of generative adversarial networks benefits from successful training methods that guarantee an objective function converges to a local minimum. Nevertheless, designing an efficient and competitive training method remains challenging, owing to the cyclic behaviors of some gradient-based approaches and the expensive computational cost of methods based on the Hessian matrix. This paper proposes the Adaptive Composite Gradients (ACG) method, which is linearly convergent in bilinear games under suitable settings. Theory and toy-function experiments suggest that our approach can alleviate cyclic behaviors and converge faster than recently proposed algorithms. Significantly, the ACG method finds stable fixed points not only in bilinear games but also in general games. ACG is a novel semi-gradient-free algorithm: it does not need to calculate the gradient at every step, reducing the computational cost of gradients and Hessians by exploiting predictive information from future iterations. We conducted two mixture-of-Gaussians experiments by integrating ACG into existing algorithms with linear GANs. Results show that ACG is competitive with previous algorithms. Realistic experiments on four prevalent data sets (MNIST, Fashion-MNIST, CIFAR-10, and CelebA) with DCGANs show that our ACG method outperforms several baselines, illustrating the superiority and efficacy of our method.
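The cyclic behavior the abstract refers to is a well-known property of simultaneous gradient descent-ascent (GDA) on bilinear games. The sketch below (an illustrative baseline, not the paper's ACG algorithm; function names and step sizes are our own) contrasts GDA, which spirals away from the equilibrium of f(x, y) = x·y, with the classical extragradient method, which uses a look-ahead step and converges:

```python
import numpy as np

# Bilinear game f(x, y) = x * y: player x minimizes, player y maximizes.
# The unique equilibrium is (0, 0).

def simultaneous_gda(x, y, lr=0.1, steps=200):
    """Simultaneous GDA: each iterate rotates and grows in norm."""
    for _ in range(steps):
        gx, gy = y, x                      # grad_x f = y, grad_y f = x
        x, y = x - lr * gx, y + lr * gy
    return x, y

def extragradient(x, y, lr=0.1, steps=200):
    """Extragradient: update with gradients taken at a look-ahead point."""
    for _ in range(steps):
        x_h, y_h = x - lr * y, y + lr * x  # extrapolation (look-ahead) step
        x, y = x - lr * y_h, y + lr * x_h  # correction using look-ahead grads
    return x, y

x0, y0 = 1.0, 1.0
xg, yg = simultaneous_gda(x0, y0)
xe, ye = extragradient(x0, y0)
print("GDA distance from equilibrium:", np.hypot(xg, yg))  # grows (cycles outward)
print("EG  distance from equilibrium:", np.hypot(xe, ye))  # shrinks toward 0
```

Per step, GDA multiplies the distance to the equilibrium by sqrt(1 + lr²) > 1, while extragradient contracts it by sqrt(1 − lr² + lr⁴) < 1, which is the gap methods like ACG aim to close at lower gradient cost.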


Related research:

- Training GANs with Centripetal Acceleration (02/24/2019) — Training generative adversarial networks (GANs) often suffers from cycli...
- HessianFR: An Efficient Hessian-based Follow-the-Ridge Algorithm for Minimax Optimization (05/23/2022) — Wide applications of differentiable two-player sequential games (e.g., i...
- Training GANs with predictive projection centripetal acceleration (10/07/2020) — Although remarkably successful in practice, training generative adversar...
- The Mechanics of n-Player Differentiable Games (02/15/2018) — The cornerstone underpinning deep learning is the guarantee that gradien...
- Differentiable Game Mechanics (05/13/2019) — Deep learning is built on the foundational guarantee that gradient desce...
- Recursive Reasoning in Minimax Games: A Level k Gradient Play Method (10/29/2022) — Despite the success of generative adversarial networks (GANs) in generat...
- Taming GANs with Lookahead (06/25/2020) — Generative Adversarial Networks are notoriously challenging to train. Th...
