
Learning Generative Models across Incomparable Spaces

by Charlotte Bunne et al.

Generative Adversarial Networks have shown remarkable success in learning distributions that faithfully recover a reference distribution in its entirety. In some cases, however, we may want to learn only certain aspects of the reference (e.g., its cluster or manifold structure) while modifying others (e.g., style, orientation, or dimension). In this work, we propose an approach to learning generative models across such incomparable spaces, and demonstrate how to steer the learned distribution towards target properties. A key component of our model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. While this framework subsumes current generative models in identically reproducing distributions, its inherent flexibility allows application to tasks in manifold learning, relational learning, and cross-domain learning.
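To make the "relational rather than absolute" comparison concrete, the sketch below (an illustration assumed from the standard definition, not code from the paper) evaluates the Gromov-Wasserstein cost of a fixed coupling between two point clouds that live in different dimensions. Because GW compares intra-space distance matrices rather than the points themselves, an isometric copy of a cloud embedded in a higher-dimensional space incurs zero cost under the matching coupling; the names `pairwise_dists` and `gw_cost` are ours.

```python
import numpy as np

def pairwise_dists(X):
    """Intra-space Euclidean distance matrix of a point cloud X (n x d)."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def gw_cost(C1, C2, T):
    """Gromov-Wasserstein cost of coupling T for distance matrices C1, C2.

    Computes sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 T[i,j] T[k,l] using the
    standard quadratic expansion, so no 4-D tensor is materialized.
    """
    p, q = T.sum(1), T.sum(0)                       # marginals of T
    tens = (C1 ** 2 @ p)[:, None] + (C2 ** 2 @ q)[None, :] - 2 * C1 @ T @ C2.T
    return float((tens * T).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))                        # cloud in R^2

# Isometric copy in R^3: pad with a zero coordinate, rotate, translate.
theta = 0.7
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta),  np.cos(theta)]])
Y = np.hstack([X, np.zeros((len(X), 1))]) @ R.T + 5.0

C1, C2 = pairwise_dists(X), pairwise_dists(Y)
T = np.eye(len(X)) / len(X)                         # identity coupling

print(gw_cost(C1, C2, T))                           # ~0: spaces are isometric
print(gw_cost(C1, pairwise_dists(2 * X), T) > 0)    # scaling breaks isometry
```

Note that this only evaluates the GW objective for a given coupling; the actual distance minimizes this cost over all couplings with prescribed marginals, which in practice is solved with entropic regularization (e.g., via projected Sinkhorn iterations).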



