Partition-Guided GANs

04/02/2021
by Mohammadreza Armandpour et al.

Despite the success of Generative Adversarial Networks (GANs), their training suffers from several well-known problems, including mode collapse and difficulty learning a disconnected set of manifolds. In this paper, we break the challenging task of learning a complex, high-dimensional distribution that supports diverse data samples into simpler sub-tasks. Our solution relies on designing a partitioner that divides the space into smaller regions, each with a simpler distribution, and training a different generator for each partition. This is done in an unsupervised manner, without requiring any labels. We formulate two desired criteria for the space partitioner that aid the training of our mixture of generators: 1) to produce connected partitions and 2) to provide a proxy for the distance between partitions and data samples, along with a direction for reducing that distance. These criteria help avoid generating samples from regions with no data density and facilitate training by giving the generators additional direction. We develop theoretical constraints under which a space partitioner satisfies the above criteria. Guided by our theoretical analysis, we design an effective neural architecture for the space partitioner that empirically satisfies these conditions. Experimental results on various standard benchmarks show that the proposed unsupervised model outperforms several recent methods.
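As a rough illustration of the partition-then-generate idea, the PyTorch sketch below wires together a partitioner network and a per-partition mixture of generators. All names, dimensions, and the plain-MLP architectures here are illustrative placeholders, not the paper's actual networks (which are specifically designed to yield connected partitions and a distance/direction guide); the shared, partition-conditioned discriminator is likewise just one possible choice.

```python
import torch
import torch.nn as nn

K = 5            # number of partitions (hypothetical choice)
LATENT_DIM = 64  # latent noise dimension (hypothetical)
DATA_DIM = 2     # toy 2-D data for illustration

# Space partitioner: assigns each real sample to one of K regions,
# learned without labels. Here it is a plain MLP stand-in.
partitioner = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.ReLU(),
    nn.Linear(128, K),
)

# One generator per partition: each models the simpler distribution of
# its own region instead of the full multi-modal distribution.
generators = nn.ModuleList(
    nn.Sequential(
        nn.Linear(LATENT_DIM, 128), nn.ReLU(),
        nn.Linear(128, DATA_DIM),
    )
    for _ in range(K)
)

# A single discriminator shared across partitions, conditioned on the
# partition index via a one-hot code (one possible design).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM + K, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

def assign_partitions(x_real):
    """Hard partition assignment for real samples (unsupervised)."""
    with torch.no_grad():
        return partitioner(x_real).argmax(dim=1)

def sample_mixture(n):
    """Draw n fake samples: pick a partition, then use its generator."""
    ks = torch.randint(0, K, (n,))        # which partition each sample uses
    z = torch.randn(n, LATENT_DIM)        # latent noise
    fake = [generators[int(k)](z[i]) for i, k in enumerate(ks)]
    return torch.stack(fake), ks

def discriminate(x, ks):
    """Score samples together with a one-hot code of their partition."""
    onehot = torch.nn.functional.one_hot(ks, K).float()
    return discriminator(torch.cat([x, onehot], dim=1))
```

In this setup, real samples are routed to partitions by `assign_partitions`, each generator only ever competes on data from its own region, and the partitioner's output could additionally serve as the guide term the abstract describes; the exact guiding loss is beyond this sketch.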


research
07/13/2020

Lessons Learned from the Training of GANs on Artificial Datasets

Generative Adversarial Networks (GANs) have made great progress in synth...
research
06/03/2018

Disconnected Manifold Learning for Generative Adversarial Networks

Real images often lie on a union of disjoint manifolds rather than one g...
research
07/19/2023

Adversarial Likelihood Estimation with One-way Flows

Generative Adversarial Networks (GANs) can produce high-quality samples,...
research
12/12/2017

PacGAN: The power of two samples in generative adversarial networks

Generative adversarial networks (GANs) are innovative techniques for lea...
research
01/31/2021

Demonstrating the Evolution of GANs through t-SNE

Generative Adversarial Networks (GANs) are powerful generative models th...
research
11/24/2020

A Convenient Infinite Dimensional Framework for Generative Adversarial Learning

In recent years, generative adversarial networks (GANs) have demonstrate...
research
02/13/2019

Rethinking Generative Coverage: A Pointwise Guaranteed Approach

All generative models have to combat missing modes. The conventional wis...
