Multi-Generator Generative Adversarial Nets

08/08/2017
by Quan Hoang, et al.

We propose a new approach to training Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapse problem. The main intuition is to employ multiple generators instead of a single one as in the original GAN. The idea is simple, yet proves extremely effective at covering diverse data modes, easily overcoming mode collapse and delivering state-of-the-art results. A minimax formulation is established among a classifier, a discriminator, and a set of generators, in a similar spirit to GAN. The generators create samples intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created by multiple generators, one of which is then randomly selected to produce the final output, similar to the mechanism of a probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop a theoretical analysis proving that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators' distributions and the empirical data distribution is minimal, whilst the JSD among the generators' distributions is maximal, hence effectively avoiding mode collapse. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN and can thus efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN: it achieves state-of-the-art Inception scores over the latest baselines, generates diverse and recognizable objects at different resolutions, and its individual generators specialize in capturing different types of objects.
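The sampling mechanism described above can be sketched in a few lines: draw a generator index uniformly at random, then let that generator map noise to a sample, so the model distribution is the uniform mixture of the individual generators' distributions. The sketch below uses toy "generators" that each place samples around one 2D Gaussian mode; the mode centres, the noise scale, and the generator form are illustrative assumptions standing in for the paper's neural networks (which share all layers except the input layer).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K toy generators, each covering one 2D mode.
# In MGAN these would be neural nets with shared parameters.
K = 4
modes = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 2.0], [0.0, -2.0]])


def sample_mixture(n):
    """MGAN-style sampling: pick a generator index uniformly, then generate.

    The resulting model distribution is p_model = (1/K) * sum_k p_{G_k}.
    """
    ks = rng.integers(0, K, size=n)       # uniform mixture weights pi_k = 1/K
    z = rng.standard_normal((n, 2))       # noise prior
    x = modes[ks] + 0.1 * z               # toy generator: shift noise to its mode
    return x, ks


x, ks = sample_mixture(1000)
# The classifier C is trained to predict ks from x; rewarding distinguishable
# generators pushes them toward different modes (maximising the JSD among them),
# while the discriminator D pulls the mixture toward the data distribution.
```

In this toy version each generator trivially owns one mode; in the actual method that specialization emerges from the classifier term in the minimax objective rather than being hard-coded.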


Related research

- Dual Discriminator Generative Adversarial Nets (09/12/2017): We propose in this paper a novel approach to tackle the problem of mode ...
- Top-K Training of GANs: Improving Generators by Making Critics Less Critical (02/14/2020): We introduce a simple (one line of code) modification to the Generative ...
- Venn GAN: Discovering Commonalities and Particularities of Multiple Distributions (02/09/2019): We propose a GAN design which models multiple distributions effectively ...
- Rethinking Generative Coverage: A Pointwise Guaranteed Approach (02/13/2019): All generative models have to combat missing modes. The conventional wis...
- Versatile Auxiliary Regressor with Generative Adversarial network (VAR+GAN) (05/28/2018): Being able to generate constrained samples is one of the most appealing ...
- Claim Verification using a Multi-GAN based Model (03/14/2021): This article describes research on claim verification carried out using ...
