
Training Generative Adversarial Networks via stochastic Nash games

10/17/2020
by   Barbara Franci, et al.

Generative adversarial networks (GANs) are a class of generative models with two antagonistic neural networks: the generator and the discriminator. These two networks compete through an adversarial process that can be modelled as a stochastic Nash equilibrium problem. Since the associated training process is challenging, it is essential to design reliable algorithms that compute an equilibrium. In this paper, we propose a stochastic relaxed forward-backward algorithm for GANs and show convergence to an exact solution, or to a neighbourhood of it, when the pseudogradient mapping of the game is monotone. We apply our algorithm to the image generation problem, where we observe computational advantages over the extragradient scheme.
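The behaviour described in the abstract can be illustrated on a toy monotone game. The sketch below is not the paper's exact algorithm or constants: the bilinear objective, step size `gamma`, relaxation weight `delta`, and the unconstrained setting (so the backward/projection step is the identity) are all illustrative assumptions, and the update is written in one common relaxed forward-backward form.

```python
import numpy as np

# Toy monotone game: min_x max_y  x*y, with unique equilibrium (0, 0).
# Its pseudogradient F(x, y) = (y, -x) is monotone but not cocoercive,
# so a plain forward-backward (gradient descent-ascent) iteration
# spirals away from the equilibrium, while a relaxed forward-backward
# iteration converges toward it.

def F(z, rng=None, noise=0.0):
    """Pseudogradient of the bilinear game; optional zero-mean noise
    mimics stochastic (minibatch) sampling of the pseudogradient."""
    x, y = z
    g = np.array([y, -x])
    if rng is not None and noise > 0.0:
        g = g + noise * rng.standard_normal(2)
    return g

def forward_backward(z0, steps=2000, gamma=0.1):
    """Plain forward-backward step: z <- z - gamma * F(z)."""
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z - gamma * F(z)
    return z

def relaxed_forward_backward(z0, steps=2000, gamma=0.1, delta=0.5,
                             noise=0.0, seed=0):
    """One common relaxed forward-backward form (an assumption here):
        z_bar <- (1 - delta) * z_bar + delta * z   (relaxation/averaging)
        z     <- z_bar - gamma * F(z)              (forward step)
    The unconstrained case is used, so no projection is needed."""
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    z_bar = np.array(z0, dtype=float)
    for _ in range(steps):
        z_bar = (1.0 - delta) * z_bar + delta * z
        z = z_bar - gamma * F(z, rng, noise)
    return z

z0 = (1.0, 1.0)
z_fb = forward_backward(z0)            # spirals outward: ||z|| grows
z_srfb = relaxed_forward_backward(z0)  # contracts toward (0, 0)
```

On this bilinear game the relaxation term is what restores convergence; the extragradient scheme mentioned in the abstract achieves a similar effect but needs two pseudogradient evaluations per iteration instead of one.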

Related research:

03/30/2020 · A game-theoretic approach for Generative Adversarial Networks
Generative adversarial networks (GANs) are a class of generative models,...

05/23/2022 · HessianFR: An Efficient Hessian-based Follow-the-Ridge Algorithm for Minimax Optimization
Wide applications of differentiable two-player sequential games (e.g., i...

11/06/2018 · Training Generative Adversarial Networks with Weights
The impressive success of Generative Adversarial Networks (GANs) is ofte...

10/23/2018 · Finding Mixed Nash Equilibria of Generative Adversarial Networks
We reconsider the training objective of Generative Adversarial Networks ...

02/17/2021 · DO-GAN: A Double Oracle Framework for Generative Adversarial Networks
In this paper, we propose a new approach to train Generative Adversarial...

04/07/2020 · Parallel/distributed implementation of cellular training for generative adversarial neural networks
Generative adversarial networks (GANs) are widely used to learn generati...

03/31/2017 · BEGAN: Boundary Equilibrium Generative Adversarial Networks
We propose a new equilibrium enforcing method paired with a loss derived...