GANs as Gradient Flows that Converge

05/05/2022
by Yu-Jui Huang, et al.

This paper approaches the unsupervised learning problem by gradient descent in the space of probability density functions. Our main result shows that along the gradient flow induced by a distribution-dependent ordinary differential equation (ODE), the unknown data distribution emerges as its long-time limit. That is, one can uncover the data distribution by simulating the distribution-dependent ODE. Intriguingly, we find that simulating the ODE is equivalent to training generative adversarial networks (GANs). The GAN framework, by definition a non-cooperative game between a generator and a discriminator, can therefore be viewed alternatively as a cooperative game between a navigator and a calibrator, in collaboration to simulate the ODE. At the theoretical level, this new perspective simplifies the analysis of GANs and gives new insight into their performance. To construct a solution to the distribution-dependent ODE, we first show that the associated nonlinear Fokker-Planck equation has a unique weak solution, using the Crandall-Liggett theorem for differential equations in Banach spaces. From this solution to the Fokker-Planck equation, we construct a unique solution to the ODE via Trevisan's superposition principle. The convergence of the induced gradient flow to the data distribution then follows from an analysis of the Fokker-Planck equation.
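To make the objects in the abstract concrete, here is a generic sketch of the kind of system it describes; the displayed equations are illustrative of density gradient flows in general, not the paper's exact construction. Steepest descent of a divergence J between the current density rho_t and the data density pairs a distribution-dependent ODE with a nonlinear Fokker-Planck equation:

```latex
% Illustrative generic form, not the paper's exact equations:
% a distribution-dependent ODE and its nonlinear Fokker-Planck equation.
\begin{align}
  \dot{Y}_t &= -\nabla \frac{\delta J}{\delta \rho}(\rho_t)(Y_t),
  \qquad \rho_t = \operatorname{Law}(Y_t), \\
  \partial_t \rho_t &= \nabla \cdot \Bigl( \rho_t \, \nabla \frac{\delta J}{\delta \rho}(\rho_t) \Bigr).
\end{align}
```

A superposition principle such as Trevisan's is exactly the tool that passes from a weak solution of the second (Fokker-Planck) equation back to a solution of the first (ODE), which is the route the abstract outlines. The sketch below then simulates such a flow with particles, in the cooperative spirit the abstract describes: a "calibrator" estimates the current velocity field, and a "navigator" advances the particles by Euler steps. All names and modeling choices here are hypothetical, chosen for illustration (a KL-divergence flow toward a one-dimensional standard Gaussian target, with a kernel score estimate), and are not taken from the paper.

```python
# A minimal particle simulation of a distribution-dependent ODE.
# Hypothetical toy setup, not the paper's algorithm: KL flow toward a
# 1-D standard Gaussian target rho_*, velocity v = grad log rho_* - grad log rho_t.
import numpy as np

rng = np.random.default_rng(0)

def kde_score(x, particles, h=0.3):
    """Estimate (log rho_t)'(x) from particles via a Gaussian kernel density estimate."""
    diffs = x[:, None] - particles[None, :]          # shape (n, m)
    w = np.exp(-0.5 * (diffs / h) ** 2)              # kernel values K(x - x_j)
    dw = -(diffs / h**2) * w                         # kernel derivatives K'(x - x_j)
    return dw.sum(axis=1) / w.sum(axis=1)            # (sum K') / (sum K)

def target_score(x):
    """(log rho_*)'(x) for a standard Gaussian target rho_*."""
    return -x

# Navigator: start the particles far from the target and follow the flow.
particles = rng.uniform(3.0, 5.0, size=500)
dt = 0.05
for _ in range(400):
    # Calibrator: estimate the velocity field from the current particles.
    v = target_score(particles) - kde_score(particles, particles)
    particles += dt * v                              # Euler step along the ODE

# The flow should end near the target: mean ~ 0, std ~ 1 (up to KDE bias).
print(f"mean = {particles.mean():+.2f}, std = {particles.std():.2f}")
```

In the correspondence the abstract draws, the calibrator role would be played by the GAN discriminator and the navigator by the generator; this toy replaces both with closed-form stand-ins so that the flow of densities itself is visible.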


Related research

05/25/2023 · Generative Adversarial Reduced Order Modelling
In this work, we present GAROM, a new approach for reduced order modelli...

06/02/2023 · GANs Settle Scores!
Generative adversarial networks (GANs) comprise a generator, trained to ...

11/13/2019 · Asymptotics of Reinforcement Learning with Neural Networks
We prove that a single-layer neural network trained with the Q-learning ...

02/02/2018 · No Modes left behind: Capturing the data distribution effectively using GANs
Generative adversarial networks (GANs), while being very versatile in rea...

11/04/2020 · On the Convergence of Gradient Descent in GANs: MMD GAN As a Gradient Flow
We consider the maximum mean discrepancy (MMD) GAN problem and propose a...

08/18/2021 · Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation
The training of artificial neural networks (ANNs) with rectified linear ...

03/02/2023 · A Field-Theoretic Approach to Unlabeled Sensing
We study the recent problem of unlabeled sensing from the information sc...
