
Wasserstein Neural Processes

10/01/2019
by Andrew Carr, et al.

Neural Processes (NPs) are a class of models that learn a mapping from a context set of input-output pairs to a distribution over functions. They are traditionally trained using maximum likelihood with a KL divergence regularization term. We show that there are desirable classes of problems where NPs, with this loss, fail to learn any reasonable distribution. We also show that this drawback is solved by using approximations of the Wasserstein distance, which yield meaningful optimal transport distances even for distributions with disjoint support. We give experimental justification for our method and demonstrate its performance. These Wasserstein Neural Processes (WNPs) maintain all of the benefits of traditional NPs while being able to approximate a new class of function mappings.
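The loss change described in the abstract can be made concrete with a small sketch. Since the abstract only says "approximations of the Wasserstein distance" without naming one, the example below uses the sliced Wasserstein distance purely as an illustration; the function name `sliced_wasserstein`, its parameters, and the toy data are assumptions, not the authors' implementation. The idea is that random one-dimensional projections reduce optimal transport to sorting, so the resulting cost stays finite and informative even when the two sample sets barely overlap (a regime where a KL-based term gives little useful signal).

```python
# Hedged sketch (assumption, not the paper's code): a Monte Carlo estimate
# of the squared sliced 2-Wasserstein distance between two equally sized
# point clouds.
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, seed=0):
    """Estimate the squared sliced 2-Wasserstein distance between
    point clouds x and y, each of shape (n_samples, dim)."""
    rng = np.random.default_rng(seed)
    dim = x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        # Draw a random direction on the unit sphere.
        theta = rng.normal(size=dim)
        theta /= np.linalg.norm(theta)
        # Project both samples onto the direction; in 1-D the optimal
        # transport plan simply matches sorted points.
        px = np.sort(x @ theta)
        py = np.sort(y @ theta)
        total += np.mean((px - py) ** 2)
    return total / n_projections

# Toy check: two Gaussians with (near-)disjoint support. The transport
# cost grows smoothly with the separation between the two clouds.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, size=(256, 2))
y = rng.normal(loc=10.0, size=(256, 2))
print(sliced_wasserstein(x, y))
```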

Related research

10/04/2019
Fused Gromov-Wasserstein Alignment for Hawkes Processes
We propose a novel fused Gromov-Wasserstein alignment method to jointly ...

07/26/2022
Sliced Wasserstein Variational Inference
Variational inference approximates an unnormalized distribution via the ...

03/17/2022
Low-rank Wasserstein polynomial chaos expansions in the framework of optimal transport
An unsupervised learning approach for the computation of an explicit func...

11/19/2020
Wasserstein Learning of Determinantal Point Processes
Determinantal point processes (DPPs) have received significant attention...

02/15/2018
Stochastic Wasserstein Barycenters
We present a stochastic algorithm to compute the barycenter of a set of ...

03/08/2018
Distributed Computation of Wasserstein Barycenters over Networks
We propose a new class-optimal algorithm for the distributed computation...