Non-parametric estimation of Jensen-Shannon Divergence in Generative Adversarial Network training

05/25/2017
by Mathieu Sinn, et al.

Generative Adversarial Networks (GANs) have become a widely popular framework for generative modelling of high-dimensional datasets. However, their training is well known to be difficult. This work presents a rigorous statistical analysis of GANs that provides straightforward explanations for common training pathologies such as vanishing gradients. Furthermore, it proposes a new training objective, Kernel GANs, and demonstrates its practical effectiveness on large-scale real-world datasets. A key element of the analysis is the distinction between training with respect to the (unknown) data distribution and training with respect to its empirical counterpart. To overcome issues in GAN training, we pursue the idea of smoothing the Jensen-Shannon Divergence (JSD) by incorporating noise into the input distributions of the discriminator. As we show, this effectively yields an empirical version of the JSD in which the true and generator densities are replaced by kernel density estimates, giving rise to Kernel GANs.
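The core idea in the abstract, replacing the unknown true and generator densities in the JSD with kernel density estimates, can be sketched numerically. The following is a minimal illustrative example, not the paper's exact estimator: it builds 1-D Gaussian KDEs from two sample sets and evaluates the JSD between them by trapezoidal integration on a grid. All function names, the bandwidth, and the integration scheme are assumptions made for illustration.

```python
import numpy as np

def gaussian_kde(samples, bandwidth):
    """Return a density function p(x) given by a 1-D Gaussian kernel density estimate."""
    samples = np.asarray(samples, dtype=float)
    def pdf(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        # Average of Gaussian kernels centred at each sample point.
        z = (x[:, None] - samples[None, :]) / bandwidth
        return np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))
    return pdf

def kde_jsd(x_real, x_gen, bandwidth=0.3, n_grid=2001):
    """Empirical JSD between two sample sets, with densities replaced by KDEs."""
    p = gaussian_kde(x_real, bandwidth)
    q = gaussian_kde(x_gen, bandwidth)
    lo = min(np.min(x_real), np.min(x_gen)) - 5 * bandwidth
    hi = max(np.max(x_real), np.max(x_gen)) + 5 * bandwidth
    grid = np.linspace(lo, hi, n_grid)
    dx = grid[1] - grid[0]
    px, qx = p(grid), q(grid)
    mx = 0.5 * (px + qx)                      # mixture density
    eps = 1e-12                               # guard against log(0)
    kl_pm = np.sum(px * np.log((px + eps) / (mx + eps))) * dx
    kl_qm = np.sum(qx * np.log((qx + eps) / (mx + eps))) * dx
    return 0.5 * (kl_pm + kl_qm)

rng = np.random.default_rng(0)
same = kde_jsd(rng.normal(0, 1, 500), rng.normal(0, 1, 500))
far = kde_jsd(rng.normal(0, 1, 500), rng.normal(4, 1, 500))
print(same, far)  # JSD is small for matching samples, larger for well-separated ones
```

The JSD is bounded above by log 2, which is approached as the two sample sets become disjoint; smoothing with a kernel (equivalently, adding noise to the discriminator inputs) keeps the densities overlapping, which is precisely what avoids the vanishing-gradient regime discussed in the abstract.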


Related research

06/02/2023: GANs Settle Scores!
Generative adversarial networks (GANs) comprise a generator, trained to ...

06/19/2020: Online Kernel based Generative Adversarial Networks
One of the major breakthroughs in deep learning over the past five years...

10/30/2017: Implicit Manifold Learning on Generative Adversarial Networks
This paper raises an implicit manifold learning perspective in Generativ...

12/18/2019: Lower Dimensional Kernels for Video Discriminators
This work presents an analysis of the discriminators used in Generative ...

11/24/2020: A Convenient Infinite Dimensional Framework for Generative Adversarial Learning
In recent years, generative adversarial networks (GANs) have demonstrate...

11/30/2018: Lipizzaner: A System That Scales Robust Generative Adversarial Network Training
GANs are difficult to train due to convergence pathologies such as mode ...

05/25/2023: The Representation Jensen-Shannon Divergence
Statistical divergences quantify the difference between probability dist...
