
Wasserstein Neural Processes

by Andrew Carr, et al.

Neural Processes (NPs) are a class of models that learn a mapping from a context set of input-output pairs to a distribution over functions. They are traditionally trained using maximum likelihood with a KL divergence regularization term. We show that there are desirable classes of problems where NPs trained with this loss fail to learn any reasonable distribution. We also show that this drawback is overcome by using approximations of the Wasserstein distance, which yields well-defined optimal transport distances even between distributions with disjoint support. We give experimental justification for our method and demonstrate its performance. These Wasserstein Neural Processes (WNPs) retain all of the benefits of traditional NPs while being able to approximate a new class of function mappings.
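The core claim, that KL divergence breaks down between distributions with disjoint support while the Wasserstein distance stays finite and informative, can be illustrated numerically. The sketch below is not from the paper; it is a minimal 1-D illustration using NumPy histograms for an empirical KL estimate and SciPy's `wasserstein_distance` for the optimal transport cost. The sample locations (0 and 5) and bin layout are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two point clouds with essentially disjoint support:
# samples tightly concentrated near 0 and near 5.
rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=0.1, size=1000)
b = rng.normal(loc=5.0, scale=0.1, size=1000)

# Empirical KL via shared histograms: bins occupied by one
# distribution but not the other contribute log(p/0) terms,
# so the estimate blows up to infinity.
bins = np.linspace(-1.0, 6.0, 50)
p, _ = np.histogram(a, bins=bins, density=True)
q, _ = np.histogram(b, bins=bins, density=True)
mask = p > 0
with np.errstate(divide="ignore", invalid="ignore"):
    kl = np.sum(
        np.where(q[mask] > 0,
                 p[mask] * np.log(p[mask] / q[mask]),
                 np.inf)
    )

# The 1-D Wasserstein-1 distance remains finite and roughly equals
# the distance the probability mass must be transported.
w = wasserstein_distance(a, b)

print(kl)  # inf: KL provides no useful training signal here
print(w)   # close to 5.0: the transport cost is well defined
```

This is exactly the failure mode the abstract describes: a KL-based loss gives degenerate (infinite or zero-gradient) values when the model and data distributions do not overlap, whereas an optimal-transport loss still measures how far apart they are.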


Fused Gromov-Wasserstein Alignment for Hawkes Processes

We propose a novel fused Gromov-Wasserstein alignment method to jointly ...

Sliced Wasserstein Variational Inference

Variational Inference approximates an unnormalized distribution via the ...

Low-rank Wasserstein polynomial chaos expansions in the framework of optimal transport

An unsupervised learning approach for the computation of an explicit func...

Wasserstein Learning of Determinantal Point Processes

Determinantal point processes (DPPs) have received significant attention...

Stochastic Wasserstein Barycenters

We present a stochastic algorithm to compute the barycenter of a set of ...

Distributed Computation of Wasserstein Barycenters over Networks

We propose a new class-optimal algorithm for the distributed computation...