Semi-Supervised Learning with IPM-based GANs: an Empirical Study

12/07/2017
by Tom Sercu et al.

We present an empirical investigation of a recent class of Generative Adversarial Networks (GANs) based on Integral Probability Metrics (IPMs) and their performance for semi-supervised learning. IPM-based GANs such as Wasserstein GAN, Fisher GAN and Sobolev GAN have desirable properties in terms of theoretical understanding, training stability, and a meaningful loss. In this work we investigate how the design of the critic (or discriminator) influences performance in semi-supervised learning. We distill three key take-aways that matter for good SSL performance: (1) using the K+1 formulation, (2) avoiding batch normalization in the critic, and (3) avoiding gradient penalty constraints on the classification layer.
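The K+1 formulation referred to above goes back to the semi-supervised GAN setup of Salimans et al.: the critic outputs K class logits, and "fake" is treated as an implicit (K+1)-th class whose logit is fixed at zero, so that D(x) = Z(x)/(Z(x)+1) with Z(x) the sum of exponentiated class logits. Below is a minimal NumPy sketch of the resulting supervised and unsupervised loss terms; the function names and the random inputs in the usage note are illustrative assumptions, not the authors' code.

```python
import numpy as np

def logsumexp(logits, axis=-1):
    # numerically stable log(sum(exp(logits))) along the class axis
    m = logits.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(logits - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def supervised_loss(logits, labels):
    # standard K-class cross-entropy on the labeled batch
    lse = logsumexp(logits)
    return np.mean(lse - logits[np.arange(len(labels)), labels])

def unsupervised_loss(logits_real, logits_fake):
    # K+1 trick: the fake-class logit is fixed at 0, so
    # D(x) = Z(x) / (Z(x) + 1) with Z(x) = sum_k exp(logit_k),
    # hence -log D(x) = softplus(-logsumexp) and
    # -log(1 - D(x)) = softplus(+logsumexp).
    loss_real = np.mean(np.logaddexp(0.0, -logsumexp(logits_real)))  # -log D(x_real)
    loss_fake = np.mean(np.logaddexp(0.0, logsumexp(logits_fake)))   # -log(1 - D(x_fake))
    return loss_real + loss_fake
```

With random logits for a batch of 8 examples and K = 10 classes, both terms are finite and positive, and the combined critic objective is simply their sum (plus whichever IPM constraint the particular GAN variant imposes).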


Related research:

- Fisher GAN (05/26/2017)
- Demystifying MMD GANs (01/04/2018)
- Sobolev GAN (11/14/2017)
- A Self-Training Method for Semi-Supervised GANs (10/27/2017)
- Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect (03/05/2018)
- Attacking Speaker Recognition With Deep Generative Models (01/08/2018)
- Classification of sparsely labeled spatio-temporal data through semi-supervised adversarial learning (01/26/2018)
