KGAN: How to Break The Minimax Game in GAN

11/06/2017
by Trung Le, et al.

Generative Adversarial Networks (GANs) are intuitively and attractively explained from the perspective of game theory, in which the two players are a discriminator and a generator. The discriminator's task is to distinguish real data from generated (i.e., fake) data, whilst the generator's task is to generate fake data that maximally confuses the discriminator. In this paper, we propose a new viewpoint for GANs, termed the minimizing general loss viewpoint. This viewpoint reveals a connection between the general loss of a classification problem under a convex loss function and an f-divergence between the true and fake data distributions. Mathematically, we propose a setting for the classification problem of true versus fake data in which we can prove that the general loss of this classification problem is exactly the negative f-divergence for a certain convex function f. This allows us to interpret the problem of learning the generator to minimize the f-divergence between the true and fake data distributions as that of maximizing the general loss, which is equivalent to the min-max problem in GANs if the logistic loss is used in the classification problem. Moreover, this viewpoint strengthens GANs in two ways. First, it allows us to employ any convex loss function for the discriminator. Second, it suggests that, rather than limiting ourselves to NN-based discriminators, we can utilize other powerful families. Bearing this viewpoint, we then propose using the kernel-based family for discriminators. This family has two appealing features: i) a powerful capacity for classifying data of a non-linear nature, and ii) convexity in the feature space. Using the convexity of this family, we can further apply Fenchel duality to equivalently transform the max-min problem into a max-max dual problem.
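The "general loss with a pluggable convex loss" idea can be sketched numerically. The snippet below is a minimal illustration, not the paper's method: `general_loss`, `logistic_loss`, and `hinge_loss` are hypothetical helper names, and the discriminator is an arbitrary fixed score function rather than a trained kernel machine. With the logistic loss, the quantity computed is the familiar GAN discriminator objective; swapping in the hinge loss shows how another convex loss slots into the same classification view.

```python
import numpy as np

def logistic_loss(margin):
    # log(1 + exp(-margin)), where margin = y * d(x) with label y in {+1, -1}.
    return np.log1p(np.exp(-margin))

def hinge_loss(margin):
    # max(0, 1 - margin): another convex surrogate loss for classification.
    return np.maximum(0.0, 1.0 - margin)

def general_loss(d, x_real, x_fake, loss=logistic_loss):
    # Classification loss for "true vs. fake" with label +1 on real samples
    # and -1 on generated samples. The discriminator minimizes this quantity;
    # in the paper's viewpoint, the generator maximizes it (confuses d).
    return 0.5 * (loss(d(x_real)).mean() + loss(-d(x_fake)).mean())

# Toy example: Gaussian "real" and "fake" samples and a fixed linear score.
rng = np.random.default_rng(0)
x_real = rng.normal(loc=+1.0, size=1000)
x_fake = rng.normal(loc=-1.0, size=1000)
d = lambda x: 2.0 * x  # an arbitrary discriminator score function

print(general_loss(d, x_real, x_fake, loss=logistic_loss))
print(general_loss(d, x_real, x_fake, loss=hinge_loss))
```

A score function that separates the two samples well (such as `d` above) attains a lower general loss than one with the labels flipped, which is the behavior the min-max game exploits.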


Related Research

- A Unifying Generator Loss Function for Generative Adversarial Networks (08/14/2023)
- GANs beyond divergence minimization (09/06/2018)
- A Convex Duality Framework for GANs (10/28/2018)
- Cumulant GAN (06/11/2020)
- Instability and Local Minima in GAN Training with Kernel Discriminators (08/21/2022)
- Linking Generative Adversarial Learning and Binary Classification (09/05/2017)
- Towards Addressing GAN Training Instabilities: Dual-objective GANs with Tunable Parameters (02/28/2023)
