
Wasserstein Learning of Determinantal Point Processes

by Lucas Anquetil et al.

Determinantal point processes (DPPs) have received significant attention as an elegant probabilistic model for discrete subset selection. Most prior work on DPP learning focuses on maximum likelihood estimation (MLE). While efficient and scalable, MLE approaches do not leverage any subset similarity information and may fail to recover the true generative distribution of discrete data. In this work, by deriving a differentiable relaxation of a DPP sampling algorithm, we present a novel approach for learning DPPs that minimizes the Wasserstein distance between the model and data composed of observed subsets. Through an evaluation on a real-world dataset, we show that our Wasserstein learning approach provides significantly improved predictive performance on a generative task compared to DPPs trained using MLE.
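As background for the abstract: a DPP (in its common L-ensemble form) assigns each subset Y of a ground set the probability P(Y) = det(L_Y) / det(L + I), where L_Y is the submatrix of the kernel L indexed by Y. The sketch below, in numpy, illustrates this definition only; the kernel and subsets are illustrative and not taken from the paper.

```python
import numpy as np
from itertools import combinations

def dpp_log_prob(L, subset):
    """Log-probability of `subset` under an L-ensemble DPP:
    P(Y) = det(L_Y) / det(L + I)."""
    n = L.shape[0]
    L_Y = L[np.ix_(subset, subset)]          # submatrix indexed by the subset
    _, logdet_Y = np.linalg.slogdet(L_Y)     # det of the 0x0 matrix is 1
    _, logdet_norm = np.linalg.slogdet(L + np.eye(n))
    return logdet_Y - logdet_norm

# Example: a random PSD kernel; probabilities over all subsets sum to 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
L = A @ A.T
total = sum(
    np.exp(dpp_log_prob(L, list(s)))
    for r in range(4)
    for s in combinations(range(3), r)
)
```

The normalization identity sum_Y det(L_Y) = det(L + I) is what makes `total` come out to 1, and it is why exact likelihoods are tractable for MLE-based DPP learning, the baseline the paper compares against.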

