
PHom-GeM: Persistent Homology for Generative Models

by Jeremy Charlier, et al.
University of Luxembourg

Generative neural network models, including the Generative Adversarial Network (GAN) and the Auto-Encoder (AE), are among the most popular neural network models for generating adversarial data. The GAN is composed of a generator that produces synthetic data and a discriminator that distinguishes the generator's output from the true data. The AE consists of an encoder, which maps the model distribution to a latent manifold, and a decoder, which maps the latent manifold back to a reconstructed distribution. However, generative models are known to produce chaotically scattered reconstructed distributions during training and, consequently, incomplete generated adversarial distributions. Current distance measures fail to address this problem because they cannot account for the shape of the data manifold, i.e. its topological features, or for the scale at which the manifold should be analyzed. We propose Persistent Homology for Generative Models, PHom-GeM, a new methodology to assess and measure the distribution of a generative model. PHom-GeM minimizes an objective function between the true and the reconstructed distributions and uses persistent homology, the study of the topological features of a space at different spatial resolutions, to compare the nature of the true and the generated distributions. Our experiments underline the potential of persistent homology for the Wasserstein GAN in comparison to the Wasserstein AE and the Variational AE. The experiments are conducted on a real-world data set that is particularly challenging for traditional distance measures and generative neural network models. PHom-GeM is the first methodology to propose a topological distance measure, the bottleneck distance, for generative models, used to compare adversarial samples in the context of credit card transactions.
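The bottleneck distance mentioned in the abstract compares two persistence diagrams (multisets of (birth, death) points) by the best possible matching between them, where any point may instead be matched to its projection onto the diagonal. The following is a minimal illustrative sketch, not the paper's implementation: the brute-force search over matchings and the example diagrams are assumptions for illustration, and this approach only scales to tiny diagrams (production code would use a library such as GUDHI).

```python
from itertools import permutations


def _diag(p):
    # Projection of a (birth, death) point onto the diagonal birth == death.
    m = (p[0] + p[1]) / 2.0
    return (m, m)


def bottleneck_distance(X, Y):
    """Brute-force bottleneck distance between two small persistence diagrams.

    Each diagram is augmented with the diagonal projections of the other
    diagram's points, so that unmatched points can be 'retired' on the
    diagonal. The distance is the minimum over all bijections of the
    maximum L-infinity cost of a matched pair.
    """
    Xa = [(p, False) for p in X] + [(_diag(q), True) for q in Y]
    Ya = [(q, False) for q in Y] + [(_diag(p), True) for p in X]
    best = float("inf")
    for perm in permutations(range(len(Ya))):
        cost = 0.0
        for i, j in enumerate(perm):
            (p, p_on_diag), (q, q_on_diag) = Xa[i], Ya[j]
            if p_on_diag and q_on_diag:
                c = 0.0  # matching two diagonal copies is free
            else:
                c = max(abs(p[0] - q[0]), abs(p[1] - q[1]))
            cost = max(cost, c)
        best = min(best, cost)
    return best


# Two hypothetical diagrams: the best matching pairs corresponding points,
# and the largest displacement (0, 2) -> (0, 2.5) sets the distance.
X = [(0.0, 2.0), (1.0, 4.0)]
Y = [(0.0, 2.5), (1.2, 4.0)]
print(bottleneck_distance(X, Y))  # -> 0.5
```

In PHom-GeM this distance is computed between the diagrams of the true and the generated distributions; a small value means the two point clouds share the same topological features at the same scales.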
