Learning the Parameters of Determinantal Point Process Kernels

02/20/2014
by Raja Hafiz Affandi, et al.

Determinantal point processes (DPPs) are well-suited for modeling repulsion and have proven useful in many applications where diversity is desired. While DPPs have many appealing properties, such as efficient sampling, learning the parameters of a DPP is still considered a difficult problem due to the non-convex nature of the likelihood function. In this paper, we propose using Bayesian methods to learn the DPP kernel parameters. These methods are applicable in large-scale and continuous DPP settings even when the exact form of the eigendecomposition is unknown. We demonstrate the utility of our DPP learning methods in studying the progression of diabetic neuropathy based on spatial distribution of nerve fibers, and in studying human perception of diversity in images.
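To make the learning problem concrete: for a discrete DPP with kernel L, the probability of observing a subset A is det(L_A) / det(L + I), and learning means fitting the parameters of L to observed subsets. The paper proposes Bayesian methods for this; the sketch below is not the paper's algorithm, only a minimal illustration of the idea, assuming an RBF kernel with a single lengthscale parameter, a flat prior on its log, and plain random-walk Metropolis-Hastings over that parameter.

```python
import numpy as np

def rbf_kernel(X, sigma):
    # Squared-exponential Gram matrix: similar points get large entries,
    # which makes the DPP assign low probability to clustered subsets.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def dpp_log_likelihood(L, subset):
    # log P(A) = log det(L_A) - log det(L + I)
    L_A = L[np.ix_(subset, subset)]
    sign_a, logdet_a = np.linalg.slogdet(L_A)
    _, logdet_full = np.linalg.slogdet(L + np.eye(L.shape[0]))
    if sign_a <= 0:
        return -np.inf
    return logdet_a - logdet_full

def mh_sample_sigma(X, subset, n_iters=200, sigma0=1.0, step=0.1, seed=0):
    # Random-walk Metropolis-Hastings on log(sigma) with a flat prior:
    # a stand-in for the Bayesian kernel-parameter inference described above.
    rng = np.random.default_rng(seed)
    log_sigma = np.log(sigma0)
    cur_ll = dpp_log_likelihood(rbf_kernel(X, np.exp(log_sigma)), subset)
    samples = []
    for _ in range(n_iters):
        prop = log_sigma + step * rng.standard_normal()
        prop_ll = dpp_log_likelihood(rbf_kernel(X, np.exp(prop)), subset)
        if np.log(rng.random()) < prop_ll - cur_ll:
            log_sigma, cur_ll = prop, prop_ll
        samples.append(np.exp(log_sigma))
    return np.array(samples)
```

Because each MH step only needs the likelihood value, this works even when the likelihood surface is non-convex; the paper's contribution is making such Bayesian updates tractable in large-scale and continuous settings where the eigendecomposition of L is unavailable.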


Related research

07/04/2015 · Inference for determinantal point processes without spectral knowledge
Determinantal point processes (DPPs) are point process models that natur...

10/28/2019 · Adaptive Sampling for Stochastic Risk-Averse Learning
We consider the problem of training machine learning models in a risk-av...

11/12/2013 · Approximate Inference in Continuous Determinantal Point Processes
Determinantal point processes (DPPs) are random point processes well-sui...

02/20/2020 · Diversity sampling is an implicit regularization for kernel methods
Kernel methods have achieved very good performance on large scale regres...

11/04/2014 · Expectation-Maximization for Learning Determinantal Point Processes
A determinantal point process (DPP) is a probabilistic model of set dive...

03/23/2018 · Determinantal Point Processes for Coresets
When one is faced with a dataset too large to be used all at once, an ob...
