
Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance
Minimum expected distance estimation (MEDE) algorithms have been widely ...
06/11/2019 ∙ by Kimia Nadjahi, et al.

Generalized Sliced Wasserstein Distances
The Wasserstein distance and its variations, e.g., the sliced-Wasserstei...
02/01/2019 ∙ by Soheil Kolouri, et al.

Implicit Manifold Learning on Generative Adversarial Networks
This paper raises an implicit manifold learning perspective in Generativ...
10/30/2017 ∙ by Kry Yik Chau Lui, et al.

On the rate of convergence of empirical measure in ∞-Wasserstein distance for unbounded density function
We consider a sequence of independent, identically distributed random s...
07/22/2018 ∙ by Anning Liu, et al.

Orthogonal Estimation of Wasserstein Distances
Wasserstein distances are increasingly used in a wide variety of applica...
03/09/2019 ∙ by Mark Rowland, et al.

Clustering Meets Implicit Generative Models
Clustering is a cornerstone of unsupervised learning which can be though...
04/30/2018 ∙ by Francesco Locatello, et al.

Rethinking Generative Coverage: A Pointwise Guaranteed Approach
All generative models have to combat missing modes. The conventional wis...
02/13/2019 ∙ by Peilin Zhong, et al.

Geometrical Insights for Implicit Generative Modeling
Learning algorithms for implicit generative models can optimize a variety of criteria that measure how the data distribution differs from the implicit model distribution, including the Wasserstein distance, the Energy distance, and the Maximum Mean Discrepancy criterion. A careful look at the geometries induced by these distances on the space of probability measures reveals interesting differences. In particular, we can establish surprising approximate global convergence guarantees for the 1-Wasserstein distance, even when the parametric generator has a nonconvex parametrization.
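Several of the papers above rely on the sliced Wasserstein distance: the high-dimensional distance is replaced by an average of closed-form one-dimensional Wasserstein distances along random projection directions. A minimal NumPy sketch of this Monte Carlo estimator is below; the function name, argument defaults, and the equal-sample-size assumption are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, seed=0):
    """Monte Carlo estimate of the sliced Wasserstein-p distance
    between two empirical distributions given as (n, d) sample arrays.
    Assumes X and Y contain the same number of samples, so the 1D
    optimal coupling is just the sorted pairing of order statistics."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Draw random unit vectors on the sphere as projection directions.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction and sort: in 1D the
    # Wasserstein distance compares matching order statistics.
    X_proj = np.sort(X @ theta.T, axis=0)  # shape (n, n_projections)
    Y_proj = np.sort(Y @ theta.T, axis=0)
    # Average the p-th power differences over samples and directions.
    return (np.mean(np.abs(X_proj - Y_proj) ** p)) ** (1.0 / p)
```

Because each projection reduces the problem to a 1D sort, the estimator costs O(k · n log n) for k projections rather than solving a full optimal transport problem, which is what makes it attractive for training generative models.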