Large-Scale Wasserstein Gradient Flows

06/01/2021
by Petr Mokrov, et al.

Wasserstein gradient flows provide a powerful means of understanding and solving many diffusion equations. Specifically, Fokker-Planck equations, which model the diffusion of probability measures, can be understood as gradient descent over entropy functionals in Wasserstein space. This equivalence, introduced by Jordan, Kinderlehrer and Otto, inspired the so-called JKO scheme to approximate these diffusion processes via an implicit discretization of the gradient flow in Wasserstein space. Solving the optimization problem associated with each JKO step, however, presents serious computational challenges. We introduce a scalable method to approximate Wasserstein gradient flows, targeted at machine learning applications. Our approach relies on input-convex neural networks (ICNNs) to discretize the JKO steps, which can be optimized by stochastic gradient descent. Unlike previous work, our method requires neither domain discretization nor particle simulation. As a result, we can sample from the measure at each time step of the diffusion and compute its probability density. We demonstrate our algorithm's performance by computing diffusions following the Fokker-Planck equation, and we apply it to unnormalized density sampling as well as nonlinear filtering.
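To make the JKO idea concrete, the sketch below runs a single JKO step in the simplest possible setting. It is not the paper's method: instead of a neural ICNN trained with SGD and an entropy functional, it uses a toy convex quadratic potential (whose gradient map is affine, playing the role of the ICNN-induced transport map), a linear potential-energy functional `V(y) = ½‖y − μ‖²` in place of entropy, and finite-difference gradient descent in place of autodiff. All names (`push`, `jko_objective`, the step size `h`, the target `mu`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from the current measure rho_k (assumed here: a standard 2-D Gaussian).
x = rng.normal(size=(512, 2))

h = 0.5                      # JKO step size (time discretization)
mu = np.array([2.0, -1.0])   # minimum of the potential V(y) = 0.5 * ||y - mu||^2

def push(params, x):
    """Gradient map T = grad(psi) of a toy convex quadratic 'ICNN'
    psi(y) = 0.5 * y^T (L L^T) y + b^T y, so T(y) = (L L^T) y + b."""
    L = params[:4].reshape(2, 2)
    b = params[4:]
    A = L @ L.T              # PSD by construction -> psi convex -> valid transport map
    return x @ A.T + b

def jko_objective(params):
    """One JKO step: energy functional + (1/2h) * squared W2 transport cost,
    both estimated by Monte Carlo over samples of rho_k."""
    y = push(params, x)
    potential = 0.5 * np.mean(np.sum((y - mu) ** 2, axis=1))
    transport = np.mean(np.sum((y - x) ** 2, axis=1)) / (2 * h)
    return potential + transport

# Plain finite-difference gradient descent (a stand-in for SGD with autodiff).
params = np.concatenate([np.eye(2).ravel(), np.zeros(2)])  # start at the identity map
eps, lr = 1e-5, 0.1
for _ in range(300):
    grad = np.array([
        (jko_objective(params + eps * e) - jko_objective(params - eps * e)) / (2 * eps)
        for e in np.eye(params.size)
    ])
    params -= lr * grad

# The pushed-forward mean moves from ~0 toward mu, but only partway:
# the (1/2h) transport penalty keeps rho_{k+1} close to rho_k.
print(np.round(push(params, x).mean(axis=0), 2))
```

Iterating such steps (with `rho_{k+1}` resampled via the learned map) traces out the discretized gradient flow; the paper's entropy term additionally requires the log-determinant of the map's Jacobian, which the affine toy above sidesteps.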


