
A Closer Look at the Optimization Landscapes of Generative Adversarial Networks

06/11/2019
by Hugo Berard et al.

Generative adversarial networks (GANs) have been very successful in generative modeling; however, they remain relatively hard to optimize compared to standard deep neural networks. In this paper, we try to gain insight into the optimization of GANs by looking at the game vector field resulting from the concatenation of both players' gradients. Based on this point of view, we propose visualization techniques that allow us to make the following empirical observations. First, GAN training suffers from rotational behavior around locally stable stationary points, which, as we show, corresponds to the presence of imaginary components in the eigenvalues of the Jacobian of the game. Second, GAN training seems to converge to a stable stationary point that is a saddle point of the generator loss, not a minimum, while still achieving excellent performance. This counter-intuitive yet persistent observation raises the question of whether a Nash equilibrium is actually needed for good performance in GANs.
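The link between rotation and imaginary eigenvalues can be illustrated on a toy example. The sketch below (not from the paper; the bilinear game and step size are illustrative assumptions) builds the game vector field for the simplest adversarial objective, f(x, y) = x·y, where one player minimizes over x and the other maximizes over y. The Jacobian of that field has purely imaginary eigenvalues, and simultaneous gradient steps rotate around the equilibrium at the origin instead of converging to it:

```python
import numpy as np

# Toy bilinear zero-sum game: f(x, y) = x * y.
# Player 1 minimizes over x, player 2 maximizes over y.
# The "game vector field" concatenates both players' gradients:
#   v(x, y) = (df/dx, -df/dy) = (y, -x).
def game_vector_field(z):
    x, y = z
    return np.array([y, -x])

# Jacobian of v is constant here: [[0, 1], [-1, 0]].
jacobian = np.array([[0.0, 1.0], [-1.0, 0.0]])
eigvals = np.linalg.eigvals(jacobian)
print(eigvals)  # purely imaginary eigenvalues +1j and -1j -> rotation

# Simultaneous gradient descent on the game vector field
# spirals away from the stationary point at the origin.
z = np.array([1.0, 1.0])
lr = 0.1  # illustrative step size
for _ in range(100):
    z = z - lr * game_vector_field(z)
print(np.linalg.norm(z))  # distance to the origin has grown
```

Because the eigenvalues have zero real part, the continuous-time dynamics circle the equilibrium, and the discrete updates (eigenvalues 1 ± 0.1i of the update matrix, with modulus greater than 1) slowly spiral outward, which is the rotational behavior the visualizations in the paper expose in real GANs.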

