
The back-and-forth method for Wasserstein gradient flows

by Matt Jacobs et al.

We present a method to efficiently compute Wasserstein gradient flows. Our approach is based on a generalization of the back-and-forth method (BFM) introduced by Jacobs and Léger to solve optimal transport problems. We evolve the gradient flow by solving the dual problem to the JKO scheme. In general, the dual problem is much better behaved than the primal problem. This allows us to efficiently run large-scale simulations for a large class of internal energies including singular and non-convex energies.
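The JKO scheme mentioned above discretizes a Wasserstein gradient flow in time: each step solves a variational problem of the form ρ^{k+1} = argmin_ρ (1/2τ) W₂²(ρ, ρᵏ) + E(ρ). For the entropy E(ρ) = ∫ ρ log ρ dx, the limiting flow as τ → 0 is the heat equation ∂ρ/∂t = Δρ. The sketch below illustrates that limiting flow with a plain explicit finite-difference scheme on a periodic 1D grid; it is not the back-and-forth method or the dual JKO solver of the paper, and the grid parameters are arbitrary choices for illustration.

```python
import numpy as np

# The Wasserstein gradient flow of the entropy E(rho) = ∫ rho log rho dx
# is the heat equation d(rho)/dt = Laplacian(rho).  This is a naive
# finite-difference illustration of that limiting flow, NOT the
# back-and-forth method (which solves the dual JKO problem per step).
n = 200
dx = 1.0 / n                 # periodic unit interval
dt = 1e-6                    # satisfies the stability condition dt <= dx^2 / 2
steps = 500

x = np.linspace(0.0, 1.0, n, endpoint=False)
rho = 1.0 + 0.5 * np.cos(2 * np.pi * x)        # positive initial density, unit mass

entropy0 = (rho * np.log(rho)).sum() * dx      # initial energy E(rho)

for _ in range(steps):
    # Second-order periodic Laplacian via circular shifts
    lap = (np.roll(rho, -1) - 2.0 * rho + np.roll(rho, 1)) / dx**2
    rho = rho + dt * lap                       # explicit Euler step

mass = rho.sum() * dx                          # the flow conserves total mass
entropy = (rho * np.log(rho)).sum() * dx       # E decreases along its gradient flow
```

Since the heat flow is the steepest descent of E in the Wasserstein metric, `entropy` should come out strictly below `entropy0` while `mass` stays at 1; the explicit scheme requires `dt <= dx**2 / 2` to remain stable.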


Variational Wasserstein gradient flow

The gradient flow of a function over the space of probability densities ...

Wasserstein Gradient Flows of the Discrepancy with Distance Kernel on the Line

This paper provides results on Wasserstein gradient flows between measur...

Operator-splitting schemes for degenerate conservative-dissipative systems

The theory of Wasserstein gradient flows in the space of probability mea...

Taming Hyperparameter Tuning in Continuous Normalizing Flows Using the JKO Scheme

A normalizing flow (NF) is a mapping that transforms a chosen probabilit...

Dual perspective method for solving the point in a polygon problem

A novel method has been introduced to solve a point inclusion in a polyg...

Learning normalizing flows from Entropy-Kantorovich potentials

We approach the problem of learning continuous normalizing flows from a ...

A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models

Score matching provides an effective approach to learning flexible unnor...