The Benefits of Pairwise Discriminators for Adversarial Training

02/20/2020
by Shangyuan Tong, et al.

Adversarial training methods typically align distributions by solving two-player games. However, in most current formulations, even if the generator perfectly matches the data distribution, a sub-optimal discriminator can still drive the two apart. Absent additional regularization, this instability can manifest as a never-ending game. In this paper, we introduce a family of objectives built on pairwise discriminators and show that only the generator needs to converge: once alignment is achieved, it is preserved under any discriminator. We provide sufficient conditions for local convergence, characterize the capacity balance that should guide the choice of discriminator and generator, and construct examples of minimally sufficient discriminators. Empirically, we illustrate the theory and the effectiveness of our approach on synthetic examples. Moreover, we show that practical methods derived from our approach generate higher-resolution images more effectively.
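To make the pairwise-discriminator idea concrete, here is a minimal PyTorch-style sketch. The framework, architectures, and the particular pairing/loss below are assumptions for illustration, not the authors' implementation: the discriminator scores ordered pairs of samples rather than single samples, and the generator is trained so that (real, generated) pairs become indistinguishable from (generated, real) pairs. The exact objective family and convergence guarantees analyzed in the paper are not reproduced here.

import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    # Scores an ordered pair (a, b) of samples instead of a single sample.
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=-1))

class Generator(nn.Module):
    def __init__(self, noise_dim, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, z):
        return self.net(z)

def training_step(G, D, opt_g, opt_d, real, noise_dim):
    bce = nn.BCEWithLogitsLoss()
    batch = real.size(0)
    ones = torch.ones(batch, 1)
    zeros = torch.zeros(batch, 1)

    # Discriminator step: distinguish (real, generated) pairs
    # from (generated, real) pairs.
    fake = G(torch.randn(batch, noise_dim)).detach()
    d_loss = bce(D(real, fake), ones) + bce(D(fake, real), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: make the pair ordering indistinguishable. When the
    # generator matches the data, the two pair distributions coincide,
    # which conveys the intuition behind alignment being preserved; the
    # formal fixed-point results in the paper apply to its own objectives.
    fake = G(torch.randn(batch, noise_dim))
    g_loss = bce(D(fake, real), ones) + bce(D(real, fake), zeros)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Hypothetical usage on 2-D synthetic data (sample_toy_batch is assumed):
# G = Generator(noise_dim=8, dim=2)
# D = PairDiscriminator(dim=2)
# opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
# opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
# training_step(G, D, opt_g, opt_d, sample_toy_batch(), noise_dim=8)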

