
Sliced Wasserstein Variational Inference

by   Mingxuan Yi, et al.
University of Bristol

Variational inference approximates an unnormalized distribution by minimizing the Kullback-Leibler (KL) divergence. Although this divergence is computationally efficient and widely used in applications, it has some undesirable properties: it is not a proper metric, as it is asymmetric and does not satisfy the triangle inequality. Optimal transport distances, by contrast, have recently shown several advantages over the KL divergence. Building on these advantages, we propose a new variational inference method that minimizes the sliced Wasserstein distance, a valid metric arising from optimal transport. This distance can be approximated simply by running MCMC, without solving any optimization problem. Our approximation also does not require a tractable density for the variational distribution, so the approximating family can be amortized by generators such as neural networks. Furthermore, we provide an analysis of the theoretical properties of our method. Experiments on synthetic and real data illustrate the performance of the proposed method.
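The abstract's key computational point is that the sliced Wasserstein distance between two sets of samples (e.g., MCMC draws from the target and draws from the variational generator) reduces to one-dimensional Wasserstein distances along random projections, each of which is solved by sorting rather than by optimization. The following is a minimal NumPy sketch of that standard Monte Carlo estimator, not the authors' implementation; the function name, the Gaussian-normalized random directions, and the equal-sample-size assumption are illustrative choices:

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein-p distance.

    x, y: (n, d) arrays of samples from the two distributions
          (assumed to have the same number of samples n).
    """
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # Draw random unit directions uniformly on the (d-1)-sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto every direction: shape (n, n_projections).
    x_proj = x @ theta.T
    y_proj = y @ theta.T
    # In 1-D, the Wasserstein-p distance between equal-size empirical
    # distributions is obtained by matching sorted samples -- no
    # optimization problem needs to be solved.
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    return np.mean(np.abs(x_sorted - y_sorted) ** p) ** (1.0 / p)
```

Because the estimator only needs samples from the variational distribution, a neural generator with an intractable density can be trained by backpropagating through this quantity, which is the amortization the abstract refers to.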

