A determinantal point process (DPP) provides a distribution over configurations of points. The defining characteristic of the DPP is that it is a repulsive point process, which makes it useful for modeling diversity. Recently, DPPs have played an increasingly important role in machine learning and statistics with applications both in the discrete setting—where they are used as a diverse subset selection method [kulesza2010structured, kulesza2011k, gillenwater2012discovering, affandi2012markov, snoek2013determinantal, Affandi:AISTATS2013]—and in the continuous setting for generating point configurations that tend to be spread out [affandi2013approximate, zou2012priors].
Formally, given a space $\Omega \subseteq \mathbb{R}^d$, a specific point configuration $A = \{x_1, \ldots, x_n\} \subseteq \Omega$, and a positive semi-definite kernel function $L: \Omega \times \Omega \to \mathbb{R}$, the probability density under a DPP with kernel $L$ is given by
\begin{equation}
\mathcal{P}(A) \propto \det(L_A),
\end{equation}
where $L_A$ is the $n \times n$ matrix with entries $L(x_i, x_j)$ for each $x_i, x_j \in A$. This defines a repulsive point process since point configurations that are more spread out according to the metric defined by the kernel $L$ have higher densities. To see this, recall that the subdeterminant in Eq. (1) is proportional to the square of the volume spanned by the kernel vectors associated with the points in $A$.
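To make the volume intuition concrete, consider the following sketch (an illustration of our own, with an arbitrary Gaussian kernel and point values, not an example from the text): the subdeterminant grows as the configuration spreads out, because the kernel rows become less linearly dependent.

```python
import numpy as np

def gaussian_kernel_matrix(points, lengthscale=1.0):
    """Kernel matrix with entries exp(-(x_i - x_j)^2 / (2 * lengthscale^2))."""
    pts = np.asarray(points, dtype=float)
    sq_dists = (pts[:, None] - pts[None, :]) ** 2
    return np.exp(-sq_dists / (2.0 * lengthscale ** 2))

clustered = [0.0, 0.1, 0.2]  # nearly coincident points
spread = [0.0, 1.0, 2.0]     # well-separated points

det_clustered = np.linalg.det(gaussian_kernel_matrix(clustered))
det_spread = np.linalg.det(gaussian_kernel_matrix(spread))
assert det_spread > det_clustered  # spread-out configurations get higher density
```

As the points coalesce, the rows of the kernel matrix become nearly identical and the determinant (squared volume) collapses toward zero.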
Building on the work of kulesza2010structured, it is intuitive to decompose the kernel as
\begin{equation}
L(x, y) = q(x) k(x, y) q(y),
\end{equation}
where $q(x)$ can be interpreted as the quality function at point $x$ and $k(x, y)$ as the similarity kernel between points $x$ and $y$. The ability to bias the quality in certain locations while still maintaining diversity via the similarity kernel offers great modeling flexibility.
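As a small sketch of this decomposition (the quality function below is hypothetical, chosen only for illustration), note that $L = \mathrm{diag}(q)\,K\,\mathrm{diag}(q)$ remains positive semi-definite whenever the similarity matrix $K$ is:

```python
import numpy as np

def dpp_kernel(points, quality_fn, lengthscale=1.0):
    """Builds entries L(x, y) = q(x) k(x, y) q(y) from a Gaussian similarity kernel."""
    pts = np.asarray(points, dtype=float)
    q = np.array([quality_fn(x) for x in pts])
    k = np.exp(-((pts[:, None] - pts[None, :]) ** 2) / (2.0 * lengthscale ** 2))
    return q[:, None] * k * q[None, :]

# hypothetical quality function favoring points near the origin
L = dpp_kernel([0.0, 1.0, 2.0], quality_fn=lambda x: np.exp(-x ** 2))
assert np.allclose(L, L.T)                     # symmetric
assert np.all(np.linalg.eigvalsh(L) > -1e-10)  # positive semi-definite (numerically)
```

Raising $q$ near a location increases the density of configurations containing points there, while the similarity kernel $k$ still penalizes configurations whose points are close together.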
One of the remarkable aspects of DPPs is that they offer efficient algorithms for inference, including computing marginal and conditional probabilities [kulesza2012determinantal], sampling [hough2006determinantal, kulesza2010structured, Affandi:AISTATS2013, affandi2013approximate], and restricting to fixed-sized point configurations ($k$-DPPs) [kulesza2011k]. However, an important component of DPP modeling, learning the DPP kernel parameters, is still considered a difficult, open problem. Even in the discrete setting, DPP kernel learning has been conjectured to be NP-hard [kulesza2012determinantal]. Intuitively, the issue arises from the fact that in seeking to maximize the log-likelihood of Eq. (1), the numerator yields a concave log-determinant term whereas the normalizer contributes a convex term, leading to a non-convex objective. This non-convexity holds even under various simplifying assumptions on the form of $L$.
Partial learning of the kernel has been studied by, for example, learning the parametric form of the quality function for a fixed similarity kernel [kulesza2011learning], or learning a weighting on a fixed set of kernel experts [kulesza2011k]. So far, the only attempt to learn the parameters of the similarity kernel has used Nelder-Mead optimization [lavancier2012statistical], which lacks theoretical guarantees of convergence to a stationary point.
In this paper, we consider parametric forms for the quality function and similarity kernel and propose Bayesian methods to learn the DPP kernel parameters $\Theta$. In addition to capturing posterior uncertainty rather than a single point estimate, these methods can be easily modified to efficiently learn large-scale and continuous DPPs where the eigenstructures are either unknown or are inefficient to compute. In contrast, gradient ascent algorithms for maximum likelihood estimation (MLE) require kernels $L(\Theta)$ that are differentiable with respect to $\Theta$ in the discrete case. In the continuous case, the eigenvalues must additionally have a known, differentiable functional form, which only occurs in limited scenarios.
In Sec. 2, we review DPPs and their fixed-sized counterpart ($k$-DPPs). We then explore likelihood maximization algorithms for learning DPP and $k$-DPP kernels. After examining the shortcomings of the MLE approach, we propose a set of techniques for Bayesian posterior inference of the kernel parameters in Sec. 3, and explore modifications to accommodate learning large-scale and continuous DPPs. In Sec. LABEL:sec:moments, we derive a set of DPP moments assuming a known kernel eigenstructure and explore using these moments as a model-checking technique. In low-dimensional settings, we can use a method of moments approach to learn the kernel parameters via numerical techniques. Finally, we test our methods on both simulated and real-world data. Specifically, in Sec. LABEL:sec:applications we use DPP learning to study the progression of diabetic neuropathy based on the spatial distribution of nerve fibers and also to study human perception of the diversity of images.
2.1 Discrete DPPs/$k$-DPPs
For a discrete base set $\Omega = \{x_1, x_2, \ldots, x_N\}$, a DPP defined by an $N \times N$ positive semi-definite kernel matrix $L$ is a probability measure on the $2^N$ possible subsets $A$ of $\Omega$:
\begin{equation}
\mathcal{P}(A) = \frac{\det(L_A)}{\det(L + I)}.
\end{equation}
Here, $L_A$ is the $|A| \times |A|$ submatrix of $L$ indexed by the elements in $A$, and $I$ is the $N \times N$ identity matrix [borodin2005eynard].
In many applications, we are instead interested in a probability distribution that gives positive mass only to subsets of a fixed size, $k$. In these cases, we consider fixed-sized DPPs (or $k$-DPPs) with probability distribution on sets $A$ of cardinality $k$ given by
\begin{equation}
\mathcal{P}^k(A) = \frac{\det(L_A)}{e_k(\lambda_1, \ldots, \lambda_N)},
\end{equation}
where $\lambda_1, \ldots, \lambda_N$ are the eigenvalues of $L$ and $e_k(\lambda_1, \ldots, \lambda_N)$ is the $k$th elementary symmetric polynomial [kulesza2011k]. Note that $e_k(\lambda_1, \ldots, \lambda_N)$ can be efficiently computed using recursion [kulesza2012determinantal].
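The recursion mentioned above can be sketched as follows (a standard $O(Nk)$ implementation; the names are ours, not the paper's):

```python
import numpy as np

def elementary_symmetric(lambdas, k):
    """Compute e_k(lambda_1, ..., lambda_N) via the recursion
    e_j^(n) = e_j^(n-1) + lambda_n * e_{j-1}^(n-1), in O(N k) time."""
    e = np.zeros(k + 1)
    e[0] = 1.0
    for lam in lambdas:
        # iterate j downwards so e[j - 1] still holds the previous-step value
        for j in range(k, 0, -1):
            e[j] += lam * e[j - 1]
    return e[k]

# brute-force check: e_2(1, 2, 3) = 1*2 + 1*3 + 2*3 = 11
assert elementary_symmetric([1.0, 2.0, 3.0], 2) == 11.0
```

The downward sweep over $j$ is what lets a single array serve for both the previous and current stages of the recursion.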
2.2 Continuous DPPs/$k$-DPPs
Consider now the case where $\Omega \subseteq \mathbb{R}^d$ is a continuous space. DPPs extend to this case naturally, with $L$ now a kernel operator instead of a matrix. Again appealing to Eq. (1), the DPP probability density for point configurations $A \subseteq \Omega$ is given by
\begin{equation}
p(A) = \frac{\det(L_A)}{\prod_{n=1}^{\infty}(\lambda_n + 1)},
\end{equation}
where $\lambda_1, \lambda_2, \ldots$ are the eigenvalues of the operator $L$.
The $k$-DPP also extends to the continuous case with
\begin{equation}
p^k(A) = \frac{\det(L_A)}{e_k(\lambda_1, \lambda_2, \ldots)},
\end{equation}
where the elementary symmetric polynomial is now over the countably infinite sequence of operator eigenvalues.
In contrast to the discrete case, the eigenvalues $\lambda_n$ for continuous DPP kernels are generally unknown; exceptions include a few kernels such as the exponentiated quadratic. However, affandi2013approximate showed that a low-rank approximation to $L$ can be used to recover an approximation to a finite truncation of the eigenvalues representing an important part of the eigenspectrum. This enables us to approximate the normalizing constants of both DPPs and $k$-DPPs, and will play a crucial role in our proposed methods of Sec. LABEL:sec:largescale.
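As a rough illustration of the idea (a simple grid discretization of the operator, not the low-rank scheme of affandi2013approximate), the leading eigenvalues of a kernel operator on an interval can be approximated from a scaled kernel matrix on a fine grid:

```python
import numpy as np

def approx_operator_eigenvalues(kernel, a=0.0, b=1.0, n_grid=200, n_eig=5):
    """Approximate the leading eigenvalues of the integral operator with the
    given kernel on [a, b] by eigendecomposing the kernel matrix on a uniform
    grid, scaled by the grid spacing (a basic quadrature discretization)."""
    x = np.linspace(a, b, n_grid)
    h = (b - a) / n_grid
    K = kernel(x[:, None], x[None, :]) * h
    return np.linalg.eigvalsh(K)[::-1][:n_eig]  # largest eigenvalues first

# exponentiated quadratic kernel on [0, 1]
lams = approx_operator_eigenvalues(lambda x, y: np.exp(-(x - y) ** 2 / 0.1))
assert np.all(lams > 0) and np.all(np.diff(lams) <= 0)
```

Refining the grid improves the approximation of the leading eigenvalues, which dominate the normalizing constant.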
3 Learning Parametric DPPs
Assume that we are given a training set consisting of $T$ samples $A_1, A_2, \ldots, A_T$, and that we model these data using a DPP/$k$-DPP with parametric kernel
\begin{equation}
L(x, y; \Theta) = q(x; \Theta)\, k(x, y; \Theta)\, q(y; \Theta)
\end{equation}
with parameters $\Theta$. We denote the associated kernel matrix for a set $A_t$ by $L_{A_t}(\Theta)$ and the full kernel matrix/operator by $L(\Theta)$. Likewise, we denote the kernel eigenvalues by $\lambda_n(\Theta)$. In this section, we explore various methods for DPP/$k$-DPP learning.
3.1 Learning using Optimization Methods
To learn the parameters $\Theta$ of a discrete DPP model, we can maximize the log-likelihood
\begin{equation}
\mathcal{L}(\Theta) = \sum_{t=1}^{T} \log \det\big(L_{A_t}(\Theta)\big) - T \log \det\big(L(\Theta) + I\big).
\end{equation}
lavancier2012statistical suggests that the Nelder-Mead simplex algorithm [nelder1965simplex] can be used to maximize $\mathcal{L}(\Theta)$. This method evaluates the objective function at the vertices of a simplex, then iteratively shrinks the simplex towards an optimal point. While this method is convenient since it does not require explicit knowledge of the derivatives of $\mathcal{L}(\Theta)$, it is regarded as a heuristic search method and is known to fail to converge to a stationary point in some cases [mckinnon1998convergence].
Gradient ascent and stochastic gradient ascent provide more attractive approaches because of their theoretical guarantees, but require knowledge of the gradient of $\mathcal{L}(\Theta)$. In the discrete DPP setting, this gradient can be computed straightforwardly, and we provide examples for discrete Gaussian and polynomial kernels in the Supplementary Material. We note, however, that these methods are still susceptible to convergence to local optima due to the non-convex likelihood landscape.
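As an illustrative sketch for a one-parameter discrete kernel (the Gaussian similarity kernel with unit quality, the finite-difference gradient, and all constants here are our own simplifications; in practice one would use the analytic gradients noted above):

```python
import numpy as np

def log_likelihood(theta, locations, samples):
    """DPP log-likelihood sum_t log det(L_{A_t}) - T log det(L + I) for a
    Gaussian similarity kernel with lengthscale theta and unit quality."""
    sq = (locations[:, None] - locations[None, :]) ** 2
    L = np.exp(-sq / (2.0 * theta ** 2))
    ll = -len(samples) * np.log(np.linalg.det(L + np.eye(len(locations))))
    for A in samples:
        ll += np.log(np.linalg.det(L[np.ix_(A, A)]))
    return ll

def gradient_ascent(locations, samples, theta=1.0, step=0.01, iters=200, eps=1e-5):
    """Ascend a central finite-difference approximation of the gradient."""
    for _ in range(iters):
        g = (log_likelihood(theta + eps, locations, samples)
             - log_likelihood(theta - eps, locations, samples)) / (2.0 * eps)
        theta = min(max(theta + step * g, 1e-3), 10.0)  # clamp for stability
    return theta

# fit the lengthscale to one observed, spread-out subset of a 5-item ground set
theta_hat = gradient_ascent(np.arange(5.0), samples=[[0, 2, 4]])
```

Even in this toy setting, the objective is non-convex in the lengthscale, so the recovered parameter depends on the starting point, mirroring the local-optima caveat above.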
The log-likelihood of the $k$-DPP kernel parameter is
\begin{equation}
\mathcal{L}(\Theta) = \sum_{t=1}^{T} \log \det\big(L_{A_t}(\Theta)\big) - T \log e_k\big(\lambda_1(\Theta), \ldots, \lambda_N(\Theta)\big),
\end{equation}
which presents an additional complication: computing the gradient of the elementary symmetric polynomial naively requires a sum over its $\binom{N}{k}$ terms.
For continuous DPPs/$k$-DPPs, gradient ascent can only be used in cases where the exact eigendecomposition of the kernel operator is known with a differentiable form for the eigenvalues (see Eq. (5)). This restricts the applicability of gradient-based likelihood maximization to a limited set of scenarios, such as a DPP with Gaussian quality function and similarity kernel. Furthermore, for kernel operators with infinite rank (such as the Gaussian), an explicit truncation has to be made, resulting in an approximate gradient of $\mathcal{L}(\Theta)$. Unfortunately, such approximate gradients are not unbiased estimates of the true gradient, so the theory associated with attractive stochastic gradient based approaches does not hold.
3.2 Bayesian Learning for Discrete DPPs
Instead of optimizing the likelihood to get an MLE, here we propose a Bayesian approach that samples from the posterior distribution over kernel parameters:
\begin{equation}
\mathcal{P}(\Theta \mid A_1, \ldots, A_T) \propto \mathcal{P}(\Theta) \prod_{t=1}^{T} \frac{\det\big(L_{A_t}(\Theta)\big)}{\det\big(L(\Theta) + I\big)}
\end{equation}
for the DPP and, for the $k$-DPP,
\begin{equation}
\mathcal{P}(\Theta \mid A_1, \ldots, A_T) \propto \mathcal{P}(\Theta) \prod_{t=1}^{T} \frac{\det\big(L_{A_t}(\Theta)\big)}{e_k\big(\lambda_1(\Theta), \ldots, \lambda_N(\Theta)\big)},
\end{equation}
where $\mathcal{P}(\Theta)$ is a prior on $\Theta$. Since neither posterior yields a closed form, we resort to approximate techniques based on Markov chain Monte Carlo (MCMC). We highlight two techniques: random-walk Metropolis-Hastings (MH) and slice sampling, although other MCMC methods can be employed without loss of generality.
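A minimal random-walk MH sketch for a scalar kernel parameter (illustrative; `log_post` stands in for either unnormalized log-posterior above, and the standard-normal target at the end is only a sanity check):

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples=5000, prop_scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings: log_post need only be known up to an
    additive constant, which is all an unnormalized posterior provides."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, log_post(theta0)
    draws = []
    for _ in range(n_samples):
        prop = theta + prop_scale * rng.standard_normal()  # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        draws.append(theta)
    return np.array(draws)

# sanity check on a standard-normal toy target
draws = metropolis_hastings(lambda t: -0.5 * t ** 2, theta0=0.0)
```

The proposal scale must be tuned for good mixing; slice sampling, highlighted above, sidesteps some of this tuning.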