Learning GANs and Ensembles Using Discrepancy

10/20/2019
by Ben Adlam, et al.

Generative adversarial networks (GANs) generate data by minimizing a divergence between two distributions. The choice of that divergence is therefore critical. We argue that the divergence must take into account the hypothesis set and the loss function used in a subsequent learning task, where the data generated by a GAN serves as training data. Taking that structural information into account is also important for deriving generalization guarantees. Thus, we propose to use the discrepancy measure, which was originally introduced for the closely related problem of domain adaptation and which precisely takes into account the hypothesis set and the loss function. We show that discrepancy admits favorable properties for training GANs and prove explicit generalization guarantees. We present efficient algorithms using discrepancy for two tasks: training a GAN directly, namely DGAN, and mixing previously trained generative models, namely EDGAN. Our experiments on toy examples and several benchmark datasets show that DGAN is competitive with other GANs and that EDGAN outperforms existing GAN ensembles, such as AdaGAN.
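For intuition, the discrepancy between distributions P and Q with respect to a hypothesis set H and a loss L is the largest gap, over pairs of hypotheses h, h' in H, between the expected loss L(h(x), h'(x)) under P and under Q. The following is a minimal, illustrative sketch of estimating that quantity empirically for a tiny finite hypothesis set and the squared loss; it is not the paper's DGAN or EDGAN implementation, and all names below (empirical_discrepancy, the toy hypotheses, etc.) are assumptions made for illustration.

```python
# Illustrative sketch only: empirical discrepancy between "real" and "fake"
# samples for a small finite hypothesis set H and the squared loss.
import numpy as np

def empirical_discrepancy(real, fake, hypotheses, loss=lambda a, b: (a - b) ** 2):
    """Estimate disc(P, Q) = max_{h, h' in H} | E_P[loss(h(x), h'(x))] - E_Q[loss(h(x), h'(x))] |."""
    best = 0.0
    for h in hypotheses:
        for h_prime in hypotheses:
            gap = abs(loss(h(real), h_prime(real)).mean()
                      - loss(h(fake), h_prime(fake)).mean())
            best = max(best, gap)
    return best

# Toy usage: 1-D samples and a handful of simple linear hypotheses.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=1000)   # stand-in for real data
fake = rng.normal(0.5, 1.2, size=1000)   # stand-in for generator samples
hypotheses = [lambda x, w=w: w * x for w in (-1.0, 0.0, 1.0)]
print(empirical_discrepancy(real, fake, hypotheses))
```

Because the measure is defined through the hypothesis set and loss of the downstream task, two distributions can be far apart under a generic divergence yet have small discrepancy if no hypothesis pair in H can tell them apart under L.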

