Realizing GANs via a Tunable Loss Function

06/09/2021
by Gowtham R. Kurri, et al.

We introduce a tunable GAN, called α-GAN, parameterized by α ∈ (0, ∞], which interpolates between various f-GANs and Integral Probability Metric (IPM) based GANs (under a constrained discriminator set). We construct α-GAN using a supervised loss function, namely α-loss, a tunable loss function that captures several canonical losses. We show that α-GAN is intimately related to the Arimoto divergence, which was first proposed by Österreicher (1996) and later studied by Liese and Vajda (2006). We posit that the holistic understanding α-GAN provides will have the practical benefit of addressing both vanishing gradients and mode collapse.


