
Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions

by Umut Şimşekli, et al.

Building on the recent theory that established the connection between implicit generative modeling and optimal transport, in this study we propose a novel parameter-free algorithm for learning the underlying distributions of complicated datasets and sampling from them. The proposed algorithm is based on a functional optimization problem, which aims at finding a measure that is as close to the data distribution as possible while remaining expressive enough for generative modeling purposes. We formulate the problem as a gradient flow in the space of probability measures. The connections between gradient flows and stochastic differential equations let us develop a computationally efficient algorithm for solving the optimization problem, and the resulting algorithm resembles recent dynamics-based Markov Chain Monte Carlo algorithms. We provide a formal theoretical analysis in which we prove finite-time error guarantees for the proposed algorithm. Our experimental results support our theory and show that our algorithm is able to capture the structure of challenging distributions.
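To make the idea concrete, here is a minimal, hedged sketch of one particle update in the spirit of the sliced-Wasserstein flow described above: random one-dimensional projections of the particles are transported onto the corresponding projections of the data (one-dimensional optimal transport reduces to quantile matching via sorting), the displacements are averaged over projections, and a Gaussian perturbation is added as in Langevin-type dynamics. All names (`swf_step`, the step-size and noise parameters) are illustrative, not the paper's actual implementation:

```python
import numpy as np

def swf_step(particles, data, n_projections=50, step=0.1, noise=0.0, rng=None):
    """One illustrative sliced-Wasserstein flow update.

    particles : (n, d) array of current samples.
    data      : (m, d) array drawn from the target distribution.
    Returns the updated (n, d) particle array.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = particles.shape
    drift = np.zeros_like(particles)
    for _ in range(n_projections):
        # Random direction on the unit sphere.
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        p_proj = particles @ theta   # 1-D projection of particles
        x_proj = data @ theta        # 1-D projection of data
        # 1-D optimal transport: match sorted particle projections
        # to the corresponding quantiles of the projected data.
        order = np.argsort(p_proj)
        idx = np.linspace(0, len(x_proj) - 1, n).astype(int)
        target = np.sort(x_proj)[idx]
        disp = np.zeros(n)
        disp[order] = target - p_proj[order]
        # Push each particle along theta by its 1-D displacement.
        drift += np.outer(disp, theta)
    drift /= n_projections
    # Deterministic drift plus optional Langevin-style Gaussian noise.
    return (particles + step * drift
            + np.sqrt(2.0 * noise * step) * rng.standard_normal(particles.shape))
```

Iterating this step moves an arbitrary particle cloud toward the data distribution; the `noise` term corresponds to the diffusion component that connects the gradient-flow view to dynamics-based MCMC.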
