Learning High Dimensional Wasserstein Geodesics

02/05/2021
by Shu Liu, et al.

We propose a new formulation and learning strategy for computing the Wasserstein geodesic between two probability distributions in high dimensions. By applying the method of Lagrange multipliers to the dynamic formulation of the optimal transport (OT) problem, we derive a minimax problem whose saddle point is the Wasserstein geodesic. We then parametrize the functions by deep neural networks and design a sample-based bidirectional learning algorithm for training. The trained networks enable sampling from the Wasserstein geodesic. As by-products, the algorithm also computes the Wasserstein distance and OT map between the marginal distributions. We demonstrate the performance of our algorithms through a series of experiments with both synthetic and realistic data.
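For context, the dynamic (Benamou-Brenier) formulation of OT referenced above is commonly written as follows; the constants and sign conventions here are one standard choice and may differ from the authors' derivation:

    W_2^2(\rho_0, \rho_1) = \min_{\rho, v} \int_0^1 \int_{\mathbb{R}^d} \tfrac{1}{2}\,\|v(t,x)\|^2 \, \rho(t,x) \, dx \, dt
    \quad \text{s.t.} \quad \partial_t \rho + \nabla \cdot (\rho v) = 0, \quad \rho(0,\cdot) = \rho_0, \quad \rho(1,\cdot) = \rho_1.

Introducing a Lagrange multiplier \phi(t,x) for the continuity-equation constraint gives a saddle-point problem of the generic form

    \min_{\rho, v} \max_{\phi} \; \int_0^1 \int \tfrac{1}{2}\,\|v\|^2 \rho \, dx \, dt \;+\; \int_0^1 \int \phi \, \big(\partial_t \rho + \nabla \cdot (\rho v)\big) \, dx \, dt,

which is the kind of minimax problem the abstract refers to. The sketch below is a minimal, illustrative PyTorch training loop in this spirit, not the authors' algorithm: it parametrizes a velocity field and a terminal potential (playing the Lagrange-multiplier role) with small networks and alternates descent/ascent steps on a particle discretization of the objective. All names (v_net, phi_net, the toy Gaussian marginals, step counts) are hypothetical placeholders; without extra regularization on the potential the inner maximization is unbounded, and the bidirectional (forward-and-backward) sampling of the paper is omitted, so this only conveys the overall structure of sample-based minimax training.

import torch
import torch.nn as nn

d = 2  # hypothetical data dimension

def mlp(in_dim, out_dim, width=64):
    return nn.Sequential(
        nn.Linear(in_dim, width), nn.Tanh(),
        nn.Linear(width, width), nn.Tanh(),
        nn.Linear(width, out_dim),
    )

v_net = mlp(d + 1, d)   # velocity field v(t, x): R^{1+d} -> R^d
phi_net = mlp(d, 1)     # terminal potential phi(1, x): R^d -> R (Lagrange-multiplier role)

opt_v = torch.optim.Adam(v_net.parameters(), lr=1e-3)
opt_phi = torch.optim.Adam(phi_net.parameters(), lr=1e-3)

# Toy stand-in marginals; in practice these would be samplers for the two given distributions.
def sample_rho0(n):
    return torch.randn(n, d) - 2.0

def sample_rho1(n):
    return torch.randn(n, d) + 2.0

def push_forward(x, n_steps=10):
    """Euler discretization of dX_t = v(t, X_t) dt on [0, 1].
    Returns the endpoint X_1 and the accumulated kinetic energy."""
    dt = 1.0 / n_steps
    kinetic = torch.zeros(())
    for k in range(n_steps):
        t = torch.full((x.shape[0], 1), k * dt)
        v = v_net(torch.cat([t, x], dim=1))
        kinetic = kinetic + 0.5 * (v ** 2).sum(dim=1).mean() * dt
        x = x + v * dt
    return x, kinetic

for it in range(2000):
    x0, y1 = sample_rho0(256), sample_rho1(256)

    # Ascent step on the potential (the "max" player): push rho_0 samples forward
    # without tracking velocity gradients, then raise phi on rho_1 samples and
    # lower it on the pushed-forward samples.
    with torch.no_grad():
        x1, _ = push_forward(x0)
    phi_loss = -(phi_net(y1).mean() - phi_net(x1).mean())
    opt_phi.zero_grad()
    phi_loss.backward()
    opt_phi.step()

    # Descent step on the velocity field (the "min" player): kinetic energy plus the
    # multiplier term coupling the pushed-forward samples to rho_1.
    x1, kinetic = push_forward(x0)
    v_loss = kinetic + phi_net(y1).mean() - phi_net(x1).mean()
    opt_v.zero_grad()
    v_loss.backward()
    opt_v.step()

After such training, stopping the Euler integration at intermediate times t in (0, 1) gives approximate samples along the learned geodesic, and the accumulated kinetic energy gives a rough estimate of half the squared Wasserstein distance (because of the 1/2 factor in the objective above).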

