
Maximum Mean Discrepancy Gradient Flow

by Michael Arbel, et al.
King Abdullah University of Science and Technology

We construct a Wasserstein gradient flow of the maximum mean discrepancy (MMD) and study its convergence properties. The MMD is an integral probability metric defined for a reproducing kernel Hilbert space (RKHS), and serves as a metric on probability measures for a sufficiently rich RKHS. We obtain conditions for convergence of the gradient flow towards a global optimum, which can be related to particle transport when optimizing neural networks. We also propose a way to regularize this MMD flow, based on an injection of noise in the gradient; this algorithmic fix is supported by both theoretical and empirical evidence. The practical implementation of the flow is straightforward, since both the MMD and its gradient have simple closed-form expressions that can be easily estimated with samples.
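The closed-form estimation mentioned above can be sketched as follows: particles X are moved along the estimated gradient of the squared MMD towards target samples Y, with noise injected into the positions at which the gradient is evaluated. This is an illustrative sketch only; the Gaussian kernel, bandwidth, step size, and noise level below are assumptions for the demo, not values taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """Pairwise k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma):
    """Biased (V-statistic) estimate of MMD^2 between samples X and Y."""
    return (gaussian_kernel(X, X, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean())

def flow_velocity(X, Y, sigma):
    """Per-particle gradient of the MMD^2 witness function, using the
    closed form grad_x k(x, y) = -(x - y) k(x, y) / sigma^2."""
    Kxx = gaussian_kernel(X, X, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    n, m = len(X), len(Y)
    attract = (Kxy.sum(1)[:, None] * X - Kxy @ Y) / m  # pull towards Y
    spread = (Kxx.sum(1)[:, None] * X - Kxx @ X) / n   # self-interaction of X
    return 2.0 * (attract - spread) / sigma**2

rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 2))        # target samples
X = rng.normal(size=(50, 2)) + 3.0  # particles, initially offset
sigma, step, beta = 2.0, 0.1, 0.1   # bandwidth, step size, noise level (assumed)

mmd_before = mmd2(X, Y, sigma)
for _ in range(300):
    # Noise injection: evaluate the gradient at perturbed particle positions.
    X_noisy = X + beta * rng.normal(size=X.shape)
    X = X - step * flow_velocity(X_noisy, Y, sigma)
mmd_after = mmd2(X, Y, sigma)
```

Running the loop moves the particle cloud onto the target samples, so the estimated MMD between X and Y shrinks over the iterations.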


