
Training Generative Adversarial Networks via stochastic Nash games

by Barbara Franci, et al.

Generative adversarial networks (GANs) are a class of generative models with two antagonistic neural networks: a generator and a discriminator. The two networks compete through an adversarial process that can be modelled as a stochastic Nash equilibrium problem. Since the associated training process is challenging, it is fundamental to design reliable algorithms to compute an equilibrium. In this paper, we propose a stochastic relaxed forward-backward algorithm for GANs and show convergence to an exact solution, or to a neighbourhood of one, when the pseudogradient mapping of the game is monotone. We apply our algorithm to the image generation problem, where we observe computational advantages over the extragradient scheme.
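To illustrate the flavour of a relaxed forward-backward iteration, here is a minimal, deterministic sketch on a toy monotone bilinear game (min_x max_y xy). All names, step sizes, and the toy game are our own choices for illustration; the paper's algorithm additionally handles stochastic pseudogradient estimates (e.g. via sample averaging) and a projection onto the constraint set, which are omitted here.

```python
import numpy as np

def relaxed_forward_backward(F, z0, step=0.5, relax=0.5, iters=500):
    """Illustrative relaxed forward-backward iteration (deterministic sketch).

    Per iteration:
      z_bar <- (1 - relax) * z_bar + relax * z   # relaxation (averaging) step
      z     <- z_bar - step * F(z)               # forward (pseudogradient) step

    In the stochastic setting, F(z) would be replaced by an averaged
    sample estimate of the pseudogradient.
    """
    z = np.asarray(z0, dtype=float)
    z_bar = z.copy()
    for _ in range(iters):
        z_bar = (1 - relax) * z_bar + relax * z
        z = z_bar - step * F(z)
    return z

# Toy monotone game: min_x max_y x*y, whose pseudogradient mapping is
# F(x, y) = (y, -x). Plain simultaneous gradient steps cycle or diverge
# on this game; the relaxation step damps the rotation.
F = lambda z: np.array([z[1], -z[0]])

z_star = relaxed_forward_backward(F, [1.0, 1.0])
```

On this bilinear toy problem the iterates contract toward the unique equilibrium at the origin, whereas plain simultaneous gradient steps with the same step size spiral outward.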


A game-theoretic approach for Generative Adversarial Networks

Generative adversarial networks (GANs) are a class of generative models,...

HessianFR: An Efficient Hessian-based Follow-the-Ridge Algorithm for Minimax Optimization

Wide applications of differentiable two-player sequential games (e.g., i...

Training Generative Adversarial Networks with Weights

The impressive success of Generative Adversarial Networks (GANs) is ofte...

Finding Mixed Nash Equilibria of Generative Adversarial Networks

We reconsider the training objective of Generative Adversarial Networks ...

DO-GAN: A Double Oracle Framework for Generative Adversarial Networks

In this paper, we propose a new approach to train Generative Adversarial...

Parallel/distributed implementation of cellular training for generative adversarial neural networks

Generative adversarial networks (GANs) are widely used to learn generati...

BEGAN: Boundary Equilibrium Generative Adversarial Networks

We propose a new equilibrium enforcing method paired with a loss derived...