PHom-GeM: Persistent Homology for Generative Models

05/23/2019
by Jeremy Charlier, et al.

Generative neural network models, including the Generative Adversarial Network (GAN) and the Auto-Encoder (AE), are among the most popular neural network models for generating adversarial data. A GAN is composed of a generator that produces synthetic data and a discriminator that distinguishes the generator's output from the true data. An AE consists of an encoder that maps the model distribution to a latent manifold and a decoder that maps the latent manifold back to a reconstructed distribution. However, generative models are known to produce chaotically scattered reconstructed distributions during training and, consequently, incomplete generated adversarial distributions. Current distance measures fail to address this problem because they do not account for the shape of the data manifold, i.e. its topological features, or the scale at which the manifold should be analyzed. We propose Persistent Homology for Generative Models, PHom-GeM, a new methodology to assess and measure the distribution of a generative model. PHom-GeM minimizes an objective function between the true and the reconstructed distributions and uses persistent homology, the study of the topological features of a space at different spatial resolutions, to compare the nature of the true and the generated distributions. Our experiments underline the potential of persistent homology for the Wasserstein GAN in comparison to the Wasserstein AE and the Variational AE. The experiments are conducted on a real-world data set that is particularly challenging for traditional distance measures and generative neural network models. PHom-GeM is the first methodology to propose a topological distance measure, the bottleneck distance, for generative models, used to compare adversarial samples in the context of credit card transactions.
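The two topological ingredients named in the abstract, a persistence diagram and the bottleneck distance between two diagrams, can be sketched in a few lines. The code below is an illustrative toy, not the paper's implementation: it computes only 0-dimensional persistence (connected components merging as the scale grows, via single-linkage/union-find), and the bottleneck matching is brute-forced, so it is only feasible for tiny diagrams. Production code would use a TDA library such as GUDHI or Ripser instead.

```python
# Toy sketch of persistence diagrams and bottleneck distance.
# NOT the PHom-GeM implementation; 0-dim homology only, brute-force matching.
import itertools
import math


def h0_persistence(points):
    """0-dimensional persistence diagram of a Euclidean point cloud.
    Every component is born at scale 0; a component dies at the edge
    length that merges it into another (single-linkage clustering)."""
    n = len(points)
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    diagram = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            diagram.append((0.0, w))  # a component dies at scale w
    return diagram  # n-1 finite pairs; the last component never dies


def bottleneck(diag_a, diag_b):
    """Brute-force bottleneck distance between two small diagrams.
    Each diagram is padded with 'diagonal' slots so that any point may
    be matched to the diagonal y = x at cost (death - birth) / 2."""
    def diag_gap(p):
        return (p[1] - p[0]) / 2.0

    def linf(p, q):
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    n, m = len(diag_a), len(diag_b)
    size = n + m  # augmented size: real points plus diagonal slots

    def cost(i, j):
        a_real, b_real = i < n, j < m
        if a_real and b_real:
            return linf(diag_a[i], diag_b[j])
        if a_real:
            return diag_gap(diag_a[i])   # a matched to the diagonal
        if b_real:
            return diag_gap(diag_b[j])   # b matched to the diagonal
        return 0.0                        # diagonal matched to diagonal

    return min(
        max(cost(i, perm[i]) for i in range(size))
        for perm in itertools.permutations(range(size))
    )


# Two point clouds whose second cluster sits at a different distance:
# the diagrams differ only in the death time of the merging component.
diag_true = h0_persistence([(0, 0), (0, 1), (5, 0), (5, 1)])
diag_gen = h0_persistence([(0, 0), (0, 1), (6, 0), (6, 1)])
print(diag_true)                       # [(0.0, 1.0), (0.0, 1.0), (0.0, 5.0)]
print(bottleneck(diag_true, diag_gen))  # 1.0
```

In PHom-GeM terms, `diag_true` would come from samples of the true distribution and `diag_gen` from the generated one; a small bottleneck distance means the two point clouds have similar topological signatures at all scales.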
