
Learning disconnected manifolds: a no GANs land

by Ugo Tanielian, et al.

Typical architectures of Generative Adversarial Networks make use of a unimodal latent distribution transformed by a continuous generator. Consequently, the modeled distribution always has connected support, which is cumbersome when learning a disconnected set of manifolds. We formalize this problem by establishing a no free lunch theorem for disconnected manifold learning, stating an upper bound on the precision of the targeted distribution. This is done by building on the necessary existence of a low-quality region where the generator continuously samples data between two disconnected modes. Finally, we derive a rejection sampling method based on the norm of the generator's Jacobian and show its efficiency on several generators, including BigGAN.
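The rejection idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy MLP generator, the latent dimension, and the 90% keep-rate are all assumptions chosen for the example. The intuition is that when a continuous generator must bridge two disconnected modes, its Jacobian is large in the transition region, so samples whose latent codes have a large Jacobian norm can be rejected.

```python
import torch

# Hypothetical toy generator: a small MLP mapping 2-D latents to 2-D samples.
generator = torch.nn.Sequential(
    torch.nn.Linear(2, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 2),
)

def jacobian_frobenius_norm(g, z):
    """Frobenius norm of the generator's Jacobian at a single latent point z."""
    J = torch.autograd.functional.jacobian(g, z)  # shape: (out_dim, latent_dim)
    return J.norm()

# Draw latents from the unimodal prior and score each by its Jacobian norm.
latents = torch.randn(256, 2)
norms = torch.stack([jacobian_frobenius_norm(generator, z) for z in latents])

# Reject the samples with the largest Jacobian norms: under the paper's
# argument, these latents are the most likely to land in the low-quality
# region the generator uses to interpolate between disconnected modes.
threshold = torch.quantile(norms, 0.90)
kept = latents[norms <= threshold]
```

In practice one would apply the same scoring to a trained generator and tune the rejection quantile as a precision/recall trade-off: a stricter threshold discards more of the off-manifold transition samples at the cost of throwing away some valid ones.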




Disconnected Manifold Learning for Generative Adversarial Networks

Real images often lie on a union of disjoint manifolds rather than one g...

Optimal precision for GANs

When learning disconnected distributions, Generative adversarial network...

MG-GAN: A Multi-Generator Model Preventing Out-of-Distribution Samples in Pedestrian Trajectory Prediction

Pedestrian trajectory prediction is challenging due to its uncertain and...

Continuous Flattening of All Polyhedral Manifolds using Countably Infinite Creases

We prove that any finite polyhedral manifold in 3D can be continuously f...

A scaled Bregman theorem with applications

Bregman divergences play a central role in the design and analysis of a ...

Latent reweighting, an almost free improvement for GANs

Standard formulations of GANs, where a continuous function deforms a con...

Selective Sampling and Mixture Models in Generative Adversarial Networks

In this paper, we propose a multi-generator extension to the adversarial...