α-GAN: Convergence and Estimation Guarantees

05/12/2022
by Gowtham R. Kurri, et al.

We prove a two-way correspondence between the min-max optimization of GANs built from general class probability estimation (CPE) loss functions and the minimization of associated f-divergences. We then focus on α-GAN, defined via the α-loss, which interpolates several GANs (including the Hellinger, vanilla, and Total Variation GANs) and corresponds to minimizing the Arimoto divergence. We show that the Arimoto divergences induced by α-GAN converge equivalently for all α∈ℝ_>0∪{∞}: convergence under one choice of α implies convergence under every other. However, under restricted learning models and finite samples, we provide estimation bounds which indicate diverse GAN behavior as a function of α. Finally, we present empirical results on a toy dataset that highlight the practical utility of tuning the α hyperparameter.
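The interpolation property described above can be sketched numerically. The snippet below is a minimal, hypothetical rendering of the α-loss (in the margin form used in the authors' earlier work "Realizing GANs via a Tunable Loss Function"); the exact normalization is an assumption here, but it illustrates how a single parameter α recovers the log-loss of the vanilla GAN at α = 1 and a soft 0-1 loss as α → ∞:

```python
import numpy as np

def alpha_loss(p, alpha):
    """α-loss of a predicted probability p for the true class (sketch).

    Assumed form: (α/(α-1)) * (1 - p^((α-1)/α)) for α ∈ (0,1) ∪ (1,∞),
    with the limits α → 1 (log-loss) and α → ∞ (soft 0-1 loss)
    handled explicitly.
    """
    p = np.asarray(p, dtype=float)
    if np.isinf(alpha):
        return 1.0 - p                 # α → ∞ limit: soft 0-1 loss
    if alpha == 1:
        return -np.log(p)              # α = 1 limit: log-loss (vanilla GAN)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

# The loss varies continuously in α: values of α near 1 give losses
# close to the log-loss, which is the interpolation the paper tunes.
print(alpha_loss(0.5, 1.0))       # log-loss at p = 0.5
print(alpha_loss(0.5, 1.0001))    # nearly identical to the line above
print(alpha_loss(0.5, np.inf))    # soft 0-1 loss: 1 - 0.5 = 0.5
```

Sweeping α in such a sketch (rather than fixing the vanilla GAN's log-loss) is what gives the single hyperparameter the paper studies for trading off the divergences being minimized.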


Related research:

- Realizing GANs via a Tunable Loss Function (06/09/2021)
- Cumulant GAN (06/11/2020)
- Generative Minimization Networks: Training GANs Without Competition (03/23/2021)
- Non-saturating GAN training as divergence minimization (10/15/2020)
- Which Training Methods for GANs do actually Converge? (01/13/2018)
- Learning GANs and Ensembles Using Discrepancy (10/20/2019)
- Towards a Better Global Loss Landscape of GANs (11/10/2020)
