Learning with minibatch Wasserstein: asymptotic and gradient properties
Optimal transport distances are powerful tools for comparing probability distributions and have found many applications in machine learning. Yet their algorithmic complexity prevents their direct use on large-scale datasets. To overcome this challenge, practitioners compute these distances on minibatches, i.e., they average the outcomes of several smaller optimal transport problems. In this paper we propose an analysis of this practice, whose effects are not yet well understood. Notably, we argue that it is equivalent to an implicit regularization of the original problem, with appealing properties such as unbiased estimators and gradients and a concentration bound around the expectation, but also with defects such as the loss of the distance property. Alongside this theoretical analysis, we conduct empirical experiments on gradient flows, GANs, and color transfer that highlight the practical interest of this strategy.
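The minibatch strategy described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes equal-size minibatches with uniform weights, in which case each small OT problem reduces to a linear assignment problem (the optimal coupling between two uniform measures of the same size is a permutation), solvable exactly with `scipy.optimize.linear_sum_assignment`. The function name `minibatch_ot` and its parameters are illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minibatch_ot(X, Y, batch_size=64, n_batches=10, rng=None):
    """Average exact OT cost over random pairs of minibatches.

    With uniform weights on equal-size batches, the optimal transport
    plan is a permutation, so the assignment problem gives the exact
    (squared-Euclidean) OT cost of each sub-problem.
    """
    rng = np.random.default_rng(rng)
    costs = []
    for _ in range(n_batches):
        # draw one minibatch from each sample
        xb = X[rng.choice(len(X), batch_size, replace=False)]
        yb = Y[rng.choice(len(Y), batch_size, replace=False)]
        # pairwise squared Euclidean cost matrix between the batches
        M = ((xb[:, None, :] - yb[None, :, :]) ** 2).sum(-1)
        # exact OT between the two uniform minibatch measures
        i, j = linear_sum_assignment(M)
        costs.append(M[i, j].mean())
    # average over sub-problems: the minibatch OT estimator
    return float(np.mean(costs))
```

As the abstract notes, this averaged quantity is a biased estimate of the full OT distance (it is generally nonzero even when both minibatches come from the same distribution), but it admits unbiased gradient estimates of its own expectation and concentrates around it.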