1 Introduction
Subset selection is a core task in many real-world applications. For example, in product recommendation we typically want to choose a small set of products from a large collection; many other examples of subset selection tasks turn up in domains like document summarization Lin and Bilmes (2012); Kulesza and Taskar (2012), sensor placement Krause et al. (2008); Krause and Guestrin (2005), image search Kulesza and Taskar (2011b); Affandi et al. (2014), and auction revenue maximization Dughmi et al. (2009), to name a few. In these applications, a good subset is often one whose individual items are all high-quality, but also all distinct. For instance, recommended products should be popular, but they should also be diverse to increase the chance that a user finds at least one of them interesting. Determinantal point processes (DPPs) offer one way to model this tradeoff; a DPP defines a distribution over all possible subsets of a ground set, and the mass it assigns to any given set is a balanced measure of that set's quality and diversity.

Originally discovered as models of fermions Macchi (1975), DPPs have recently been effectively adapted for a variety of machine learning tasks Affandi et al. (2014); Snoek et al. (2013); Kang (2013); Affandi et al. (2013a); Shah and Ghahramani (2013); Affandi et al. (2013b); Gillenwater et al. (2012a); Zou and Adams (2013); Affandi et al. (2012); Gillenwater et al. (2012b); Kulesza and Taskar (2011a, b, 2010). They offer attractive computational properties, including exact and efficient normalization, marginalization, conditioning, and sampling Hough et al. (2006). These properties arise in part from the fact that a DPP can be compactly parameterized by an N x N positive semidefinite matrix L. Unfortunately, though, learning L from example subsets by maximizing likelihood is conjectured to be NP-hard (Kulesza, 2012, Conjecture 4.1). While gradient ascent can be applied in an attempt to approximately optimize the likelihood objective, we show later that it requires a projection step that often produces degenerate results.

For this reason, in most previous work only partial learning of L has been attempted. Kulesza and Taskar (2011a) showed that the problem of learning a scalar weight for each row of L is a convex optimization problem. This amounts to learning what makes an item high-quality, but does not address the issue of what makes two items similar. Kulesza and Taskar (2011b) explored a different direction, learning weights for a linear combination of DPPs with fixed kernels. This works well in a limited setting, but requires storing a potentially large set of kernel matrices, and the final distribution is no longer a DPP, which means that many attractive computational properties are lost. Affandi et al. (2014) proposed as an alternative that one first assume L takes on a particular parametric form, and then sample from the posterior distribution over kernel parameters using Bayesian methods. This overcomes some of the disadvantages of Kulesza and Taskar (2011b)'s ensemble method, but does not allow for learning an unconstrained, non-parametric L.
The learning method we propose in this paper differs from those of prior work in that it does not assume fixed values or restrictive parameterizations for the kernel, and it exploits the kernel's eigendecomposition. Many properties of a DPP can be simply characterized in terms of the eigenvalues and eigenvectors of its kernel, and working with this decomposition allows us to develop an expectation-maximization (EM) style optimization algorithm. This algorithm negates the need for the problematic projection step that naive gradient ascent requires to maintain positive semidefiniteness of the kernel. As the experiments show, a projection step can sometimes lead to learning a nearly diagonal kernel, which fails to model the negative interactions between items. These interactions are vital, as they lead to the diversity-seeking nature of a DPP. The proposed EM algorithm overcomes this failing, making it more robust to initialization and dataset changes. It is also asymptotically faster than gradient ascent.
2 Background
Formally, a DPP on a ground set of N items, Y = {1, ..., N}, is a probability measure P on 2^Y, the set of all subsets of Y. For every A ⊆ Y we have P(A) ∝ det(L_A), where L is an N x N positive semidefinite (PSD) matrix. The subscript denotes the restriction of L to the entries indexed by elements of A, and we have det(L_∅) = 1. Notice that the restriction to PSD matrices ensures that all principal minors of L are non-negative, so that P(A) ≥ 0, as required for a proper probability distribution. The normalization constant for the distribution can be computed explicitly thanks to the fact that Σ_{A ⊆ Y} det(L_A) = det(L + I), where I is the N x N identity matrix. Intuitively, we can think of a diagonal entry L_jj as capturing the quality of item j, while an off-diagonal entry L_jl measures the similarity between items j and l.

An alternative representation of a DPP is given by its marginal kernel: K = L(L + I)^{-1}. The K-L relationship can also be written in terms of their eigendecompositions. K and L share the same eigenvectors v_j, and an eigenvalue λ_j of L corresponds to an eigenvalue λ_j / (λ_j + 1) of K:
L = Σ_{j=1}^N λ_j v_j v_j^⊤,   K = Σ_{j=1}^N (λ_j / (λ_j + 1)) v_j v_j^⊤.    (1)
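As a minimal numerical sketch of these definitions, the following dependency-free snippet (the 2 x 2 matrix values are made up for illustration) computes the unnormalized set probabilities det(L_A), the normalization constant det(L + I), and the marginal kernel K = L(L + I)^{-1}:

```python
# 2x2 linear algebra helpers, kept dependency-free for illustration
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d, M[0][0] / d]]

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A toy DPP kernel: both items have quality 1.0, similarity 0.8.
L = [[1.0, 0.8],
     [0.8, 1.0]]
L_plus_I = [[L[0][0] + 1, L[0][1]],
            [L[1][0], L[1][1] + 1]]

Z = det2(L_plus_I)       # normalization: det(L + I) = sum of det(L_A) over all A
p_empty = 1.0 / Z        # det(L_emptyset) = 1 by convention
p_1 = L[0][0] / Z        # P({1}) = det(L_{1}) / det(L + I)
p_2 = L[1][1] / Z
p_12 = det2(L) / Z       # P({1,2}) is small because the two items are similar

K = mul2(L, inv2(L_plus_I))   # marginal kernel K = L (L + I)^{-1}
# K's diagonal gives inclusion marginals: P(1 in A) = K_11 = P({1}) + P({1,2})
```

Note how the large off-diagonal similarity depresses P({1,2}) relative to the singleton probabilities, and how K's diagonal entry matches the sum of the probabilities of all sets containing item 1.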
Clearly, if L is PSD then K is as well, and the above equations also imply that the eigenvalues of K are further restricted to be ≤ 1. K is called the marginal kernel because, for any set A drawn from the DPP and for every B ⊆ Y:
P(B ⊆ A) = det(K_B).    (2)
We can also write the exact (non-marginal, normalized) probability of a set A in terms of K:
P(A) = |det(K − I_Ā)|,    (3)
where I_Ā is the identity matrix with the diagonal entries corresponding to items in A zeroed out (Kulesza, 2012, Equation 3.69). In what follows we use the K-based formula for P(A) and learn the marginal kernel K. This is equivalent to learning L, as Equation (1) can be applied to convert from K to L.
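The equivalence between the L-based and K-based formulas can be checked numerically. This sketch (pure Python, with a made-up 2 x 2 kernel) verifies that |det(K − I_Ā)| reproduces det(L_A) / det(L + I) for every subset A:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d, M[0][0] / d]]

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

L = [[1.0, 0.8],
     [0.8, 1.0]]
L_plus_I = [[2.0, 0.8], [0.8, 2.0]]
K = mul2(L, inv2(L_plus_I))

def prob_via_K(A):
    # P(A) = |det(K - I_Abar)|: subtract 1 from diagonal entries NOT in A
    M = [row[:] for row in K]
    for j in range(2):
        if j not in A:
            M[j][j] -= 1.0
    return abs(det2(M))

def prob_via_L(A):
    # P(A) = det(L_A) / det(L + I)
    if len(A) == 0:
        dLA = 1.0
    elif len(A) == 1:
        dLA = L[A[0]][A[0]]
    else:
        dLA = det2(L)
    return dLA / det2(L_plus_I)

subsets = [[], [0], [1], [0, 1]]
match = all(abs(prob_via_K(A) - prob_via_L(A)) < 1e-9 for A in subsets)
```

The K-based formula is the one the learning objective below is built on, since it stays well-defined even when some of K's eigenvalues are zero.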
3 Learning algorithms
In our learning setting the input consists of n example subsets, {A_1, ..., A_n}, where A_i ⊆ Y for all i. Our goal is to maximize the likelihood of these example sets. We first describe in Section 3.1 a naive optimization procedure: projected gradient ascent on the entries of the marginal kernel K, which will serve as a baseline in our experiments. We then develop an EM method: Section 3.2 changes variables from kernel entries to eigenvalues and eigenvectors (introducing a hidden variable in the process), Section 3.3 applies Jensen's inequality to lower-bound the objective, and Sections 3.4 and 3.5 outline a coordinate ascent procedure on this lower bound.
3.1 Projected gradient ascent
The log-likelihood maximization problem, based on Equation (3), is:
max_K  Σ_{i=1}^n log |det(K − I_{Ā_i})|   s.t.   K ⪰ 0,  I − K ⪰ 0,    (4)
where the first constraint ensures that K is PSD and the second puts an upper limit of 1 on its eigenvalues. Let ℒ(K) represent this log-likelihood objective. Its partial derivative with respect to K is easy to compute by applying a standard matrix derivative rule (Petersen and Pedersen, 2012, Equation 57):
∂ℒ(K)/∂K = Σ_{i=1}^n (K − I_{Ā_i})^{-1}.    (5)
Thus, projected gradient ascent Levitin and Polyak (1966) is a viable, simple optimization technique. Algorithm 1 outlines this method, which we refer to as K-Ascent (KA). The initial K supplied as input to the algorithm can be any PSD matrix with eigenvalues ≤ 1. The first part of the projection step chooses the closest (in Frobenius norm) PSD matrix to the current iterate (Henrion and Malick, 2011, Equation 1). The second part caps the eigenvalues at 1. (Notice that only the eigenvalues have to be projected; K remains symmetric after the gradient step, so its eigenvectors are already guaranteed to be real.)
Unfortunately, the projection can take us to a poor local optimum. To see this, consider the case where the starting kernel is a poor fit to the data. In this case, a large initial step size will probably be accepted; even though such a step will likely result in the truncation of many eigenvalues at zero, the resulting matrix will still be an improvement over the poor initial kernel. However, with many zero eigenvalues, the new K will be near-diagonal, and, unfortunately, Equation (5) dictates that if the current K is diagonal, then its gradient is as well. Thus, the KA algorithm cannot easily move to any highly non-diagonal matrix. It is possible that employing more complex step-size selection mechanisms could alleviate this problem, but the EM algorithm we develop in the next section negates the need for these entirely.
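The diagonal fixed point is easy to verify numerically. In this illustrative sketch (pure Python, a toy 2 x 2 case with made-up example sets), a diagonal K yields a gradient Σ_i (K − I_{Ā_i})^{-1} with zero off-diagonal entries, so a gradient step can never introduce off-diagonal structure:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d, M[0][0] / d]]

K = [[0.5, 0.0],
     [0.0, 0.5]]                 # a diagonal iterate, e.g. after a harsh projection
examples = [[0], [1], [0, 1]]    # hypothetical observed subsets A_i

grad = [[0.0, 0.0], [0.0, 0.0]]
for A in examples:
    M = [row[:] for row in K]
    for j in range(2):
        if j not in A:
            M[j][j] -= 1.0       # K - I_{A_i bar}
    G = inv2(M)                  # one summand of Equation (5)
    for a in range(2):
        for b in range(2):
            grad[a][b] += G[a][b]

# grad's off-diagonal entries are exactly zero: KA is stuck in the
# space of diagonal kernels, which model no diversity at all.
```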
The EM algorithm we develop also has an advantage in terms of asymptotic runtime. The computational complexity of KA is dominated by the matrix inverses of the derivative, each of which requires O(N^3) operations, and by the eigendecomposition needed for the projection, also O(N^3). The overall runtime of KA, assuming T iterations until convergence and an average of γ iterations to find a step size, is O(TγnN^3). As we will show in the following sections, the overall runtime of the EM algorithm is O(TnNk^2 + TγN^3), where k is the size of the largest example set; this can be substantially better than KA's runtime when k ≪ N.
3.2 Eigendecomposing K
Eigendecomposition is key to many core DPP algorithms such as sampling and marginalization. This is because the eigendecomposition provides an alternative view of the DPP as a generative process, which often leads to more efficient algorithms. Specifically, sampling a set can be broken down into a two-step process, the first of which involves generating a hidden variable that codes for a particular subset of K's eigenvectors. We review this process below, then exploit it to develop an EM optimization scheme.
Suppose K = VΛV^⊤ is an eigendecomposition of K; from here on, λ_j and v_j denote the eigenvalues and eigenvectors of K, which all lie in [0, 1]. Let V_J denote the submatrix of V containing only the columns corresponding to the indices in a set J ⊆ {1, ..., N}. Consider the corresponding marginal kernel, with all selected eigenvalues set to 1:
K^J = Σ_{j∈J} v_j v_j^⊤ = V_J V_J^⊤.    (6)
Any such kernel whose eigenvalues are all in {0, 1} is called an elementary DPP. According to (Hough et al., 2006, Theorem 7), a DPP with marginal kernel K is a mixture of all 2^N possible elementary DPPs:
P(A) = Σ_{J ⊆ {1,...,N}} P^J(A) Π_{j∈J} λ_j Π_{j∉J} (1 − λ_j),    (7)

where P^J denotes the elementary DPP with marginal kernel K^J.
This perspective leads to an efficient DPP sampling algorithm, where a set J is first chosen according to its mixture weight in Equation (7), and then a simple algorithm is used to sample from the associated elementary DPP (Kulesza and Taskar, 2012, Algorithm 1). In this sense, the index set J is an intermediate hidden variable in the process for generating a sample A: first J is drawn with probability

P(J) = Π_{j∈J} λ_j Π_{j∉J} (1 − λ_j),    (8)

and then A is drawn from the elementary DPP associated with J, which yields sets of size exactly |J|:

P(A | J) = 1{|A| = |J|} det([V_J V_J^⊤]_A).    (9)
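The mixture view can be verified by brute-force enumeration. This sketch (pure Python, hypothetical eigenvalues for a three-item ground set) checks that the mixture weights of Equation (7) sum to one, and that the expected number of selected eigenvectors, which equals the expected sampled-set size, is Σ_j λ_j:

```python
from itertools import combinations

lams = [0.9, 0.5, 0.2]           # hypothetical eigenvalues of K, each in [0, 1]
N = len(lams)

total_weight = 0.0
expected_size = 0.0
for size in range(N + 1):
    for J in combinations(range(N), size):
        # mixture weight of the elementary DPP indexed by J
        w = 1.0
        for j in range(N):
            w *= lams[j] if j in J else (1.0 - lams[j])
        total_weight += w
        expected_size += w * len(J)

# total_weight == 1: the weights form a distribution over index sets J.
# expected_size == sum(lams): each eigenvector j is selected
# independently with probability lambda_j.
```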
3.3 Lower bounding the objective
We now introduce an auxiliary distribution, q(J | A_i), and deploy it with Jensen's inequality to lower-bound the likelihood objective. This is a standard technique for developing EM schemes for dealing with hidden variables Neal and Hinton (1998). Proceeding in this direction:
ℒ(K) = Σ_{i=1}^n log P(A_i) = Σ_{i=1}^n log ( Σ_J P(J) P(A_i | J) )
     ≥ Σ_{i=1}^n Σ_J q(J | A_i) log ( P(J) P(A_i | J) / q(J | A_i) ) ≡ F(q, K).    (10)
The function F can be expressed in either of the following two forms:
F(q, K) = Σ_{i=1}^n [ −KL( q(J | A_i) ‖ P(J | A_i) ) + log P(A_i) ]    (11)
        = Σ_{i=1}^n [ E_{q(J|A_i)}[ log P(J, A_i) ] + H(q(J | A_i)) ],    (12)
where H is entropy. Consider optimizing this new objective by coordinate ascent. From Equation (11) it is clear that, holding K constant, F is concave in q: the KL divergence is convex in q and enters F negatively. Holding q constant in Equation (12) yields the following function of K's eigendecomposition:
F(q, K) = Σ_{i=1}^n E_{q(J|A_i)}[ Σ_{j∈J} log λ_j + Σ_{j∉J} log(1 − λ_j) + log det([V_J V_J^⊤]_{A_i}) ] + constant,    (13)

where the constant collects the entropy terms, which do not depend on K.
This expression is concave in the eigenvalues λ, since log is concave. However, it is not concave in V due to the non-convex constraint V^⊤V = I. We describe in Section 3.5 one way to handle this.
To summarize, coordinate ascent on F alternates the following "expectation" and "maximization" steps; the first is concave in q, and the second is concave in the eigenvalues:
E-step:  q ← argmin_q Σ_{i=1}^n KL( q(J | A_i) ‖ P(J | A_i) ),    (14)
M-step:  (Λ, V) ← argmax_{Λ,V} F(q, K),  where K = VΛV^⊤.    (15)
3.4 E-step
The E-step is easily solved by setting q(J | A_i) = P(J | A_i), which minimizes the KL divergence. Interestingly, we can show that this distribution is itself a conditional DPP, and hence can be compactly described by an N x N kernel matrix. Thus, to complete the E-step, we simply need to construct this kernel. Lemma 1 (see Appendix A for a proof) gives an explicit formula. Note that q(J | A_i)'s probability mass is restricted to sets of a particular size k = |A_i|, and hence we call it a k-DPP. A k-DPP is a variant of DPP that can also be efficiently sampled from and marginalized, via modifications of the standard DPP algorithms. (See Appendix A and Kulesza and Taskar (2011b) for more on k-DPPs.)
Lemma 1.
At the completion of the E-step, q(J | A_i), with k = |A_i|, is a k-DPP with (non-marginal) kernel Q^{A_i}:

q(J | A_i) ∝ 1{|J| = k} det(Q^{A_i}_J),  where  Q^{A_i} = R V^⊤ E_{A_i} E_{A_i}^⊤ V R,    (16)

R = diag( √(λ_1 / (1 − λ_1)), ..., √(λ_N / (1 − λ_N)) ),    (17)

and E_{A_i} is the N x k matrix consisting of the columns of the identity corresponding to the items in A_i.
3.5 M-step
The M-step update for the eigenvalues is a closed-form expression with no need for projection. Taking the derivative of Equation (13) with respect to λ_j, setting it equal to zero, and solving for λ_j:
λ_j = (1/n) Σ_{i=1}^n Σ_{J : j∈J} q(J | A_i).    (18)
The exponential-sized sum over J here is impractical, but we can eliminate it. Recall from Lemma 1 that q(J | A_i) is a k-DPP with kernel Q^{A_i}. Thus, we can use k-DPP marginalization algorithms to efficiently compute the sum over J. More concretely, let λ̂_r and v̂_r represent the eigenvalues and eigenvectors of Q^{A_i}, with v̂_{rj} indicating the jth element of the rth eigenvector. Then the singleton marginals are:
Σ_{J : j∈J} q(J | A_i) = q(j ∈ J | A_i) = (1/e_k) Σ_{r=1}^k λ̂_r (v̂_{rj})^2 e^{−r}_{k−1},    (19)

where e_k denotes the kth-order elementary symmetric polynomial of Q^{A_i}'s eigenvalues and e^{−r}_{k−1} the (k−1)th-order polynomial with λ̂_r omitted,
which allows us to compute the eigenvalue updates in time O(nNk^2), for k = max_i |A_i|. (See Appendix B for the derivation of Equation (19) and its computational complexity.) Note that this update is self-normalizing, so explicit enforcement of the 0 ≤ λ_j ≤ 1 constraint is unnecessary. There is one small caveat: the matrix R will be infinite if any λ_j is exactly equal to 1 (due to the 1/(1 − λ_j) term in Equation (17)). In practice, we simply tighten the constraint on λ_j to keep it slightly below 1.
Turning now to the M-step update for the eigenvectors, the derivative of Equation (13) with respect to V involves an exponential-size sum over J similar to that of the eigenvalue derivative. However, the terms of the sum in this case depend on V as well as on q, making it hard to simplify. Yet, for the particular case of the initial gradient, where the V at which we evaluate is the same V used to construct q in the E-step, simplification is possible:
∂F/∂V = Σ_{i=1}^n 2 E_{A_i} H_i^{-1} [V]_{A_i} R^2,    (20)
where H_i is the k x k matrix [V R^2 V^⊤]_{A_i} and R is as in Equation (17). E_{A_i} is an N x k matrix containing the columns of the identity corresponding to items in A_i; it simply serves to map the gradients with respect to [V]_{A_i}, the rows of V indexed by A_i, into the proper positions in the gradient with respect to V. This formula allows us to compute the eigenvector derivatives in time O(nNk^2), where again k = max_i |A_i|. (See Appendix C for the derivation of Equation (20) and its computational complexity.)
Equation (20) is only valid for the first gradient step, so in practice we do not bother to fully optimize V in each M-step; we simply take a single gradient step on V. Ideally we would then repeatedly evaluate the M-step objective, Equation (13), with various step sizes to find the optimal one. However, the M-step objective is intractable to evaluate exactly, as it is an expectation with respect to an exponential-size distribution. In practice, we solve this issue by performing an E-step for each trial step size. That is, we update q's distribution to match the updated V and Λ that define K, and then determine whether the current step size is good by checking for improvement in the likelihood ℒ(K).
There is also the issue of enforcing the non-convex constraint V^⊤V = I. We could project V to ensure this constraint, but, as previously discussed for the eigenvalues, projection steps often lead to poor local optima. Thankfully, for the particular constraint associated with V, more sophisticated update techniques exist: the constraint corresponds to optimization over a Stiefel manifold, so the algorithm from (Edelman et al., 1998, Page 326) can be employed. In practice, we simplify this algorithm by neglecting second-order information (the Hessian) and by using the fact that the V in our application is full-rank. With these simplifications, the following multiplicative update is all that is needed:
V ← V exp( η (V^⊤G − G^⊤V) ),  with  G = ∂F/∂V,    (21)
where exp denotes the matrix exponential and η is the step size. Algorithm 2 summarizes the overall EM method. As previously mentioned, assuming T iterations until convergence and an average of γ iterations to find a step size, its overall runtime is O(TnNk^2 + TγN^3). The first term in this complexity comes from the eigenvalue updates, Equation (19), and the eigenvector derivative computation, Equation (20). The second term comes from repeatedly computing the Stiefel manifold update of V, Equation (21), during the step size search.
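The key property of a multiplicative update of this kind is that multiplying by the exponential of a skew-symmetric matrix preserves orthonormality exactly, so no projection of V is ever needed. The sketch below (pure Python, 2 x 2 case with a made-up gradient) relies on the fact that the exponential of a 2 x 2 skew-symmetric matrix is a plane rotation:

```python
import math

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose2(M):
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

V = [[1.0, 0.0],
     [0.0, 1.0]]                       # current orthonormal eigenvectors
G = [[0.3, -0.1],
     [0.7, 0.2]]                       # made-up gradient dF/dV
eta = 0.05                             # step size

# Skew-symmetric direction S = V^T G - G^T V; for 2x2, S = [[0, -a], [a, 0]]
VtG = mul2(transpose2(V), G)
a = VtG[1][0] - VtG[0][1]
# exp(eta * S) is then a rotation by angle eta * a
c, s = math.cos(eta * a), math.sin(eta * a)
Rot = [[c, -s],
       [s, c]]

V_new = mul2(V, Rot)                   # multiplicative update along the manifold
VtV = mul2(transpose2(V_new), V_new)   # stays exactly orthonormal: VtV = I
```

In higher dimensions the rotation is replaced by a general matrix exponential, but the orthonormality-preservation argument is identical, since exp of a skew-symmetric matrix is always orthogonal.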
4 Experiments
We test the proposed EM learning method (Algorithm 2) by comparing it to K-Ascent (KA, Algorithm 1). Code and data for all experiments can be downloaded from https://code.google.com/p/emfordpps. Both methods require a starting marginal kernel K. Note that neither EM nor KA can deal well with starting from a kernel with too many zeros. For example, starting from a diagonal kernel, both gradients, Equations (5) and (20), will be diagonal, resulting in no modeling of diversity. Thus, the two initialization options that we explore have non-trivial off-diagonals. The first of these options is relatively naive, while the other incorporates statistics from the data.
For the first initialization type, we use a Wishart distribution with N degrees of freedom and an identity covariance matrix to draw L ∼ Wishart(N, I). The Wishart distribution is relatively unassuming: in terms of eigenvectors, it spreads its mass uniformly over all unitary matrices James (1964). We make just one simple modification to its output to make it a better fit for practical data: we rescale the resulting matrix by 1/N so that the corresponding DPP will place a non-trivial amount of probability mass on small sets. (The Wishart's mean is NI, so it tends to overemphasize larger sets unless we rescale.) We then convert L to K via Equation (1).
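A sketch of this initializer (pure Python, standard library only; N and the random seed are arbitrary choices for illustration): drawing X with i.i.d. standard normal entries makes X X^⊤ a Wishart sample with N degrees of freedom and identity covariance, which is then rescaled by 1/N:

```python
import random

random.seed(0)
N = 4

# X has i.i.d. standard normal entries, so X X^T ~ Wishart(N, I)
X = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]
L_init = [[sum(X[a][m] * X[b][m] for m in range(N)) / N  # rescale by 1/N
           for b in range(N)] for a in range(N)]

# Sanity checks: symmetric, and PSD since x^T L x = ||X^T x||^2 / N >= 0
symmetric = all(abs(L_init[a][b] - L_init[b][a]) < 1e-12
                for a in range(N) for b in range(N))
probe = [random.gauss(0.0, 1.0) for _ in range(N)]
quad = sum(probe[a] * L_init[a][b] * probe[b]
           for a in range(N) for b in range(N))
```

The sample is dense with non-trivial off-diagonals, which is exactly what both learning algorithms need to get started.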
For the second initialization type, we employ a form of moment matching. Let p_j and p_{jl} represent the normalized frequencies of single items and pairs of items in the training data:

p_j = (1/n) Σ_{i=1}^n 1[j ∈ A_i],   p_{jl} = (1/n) Σ_{i=1}^n 1[j ∈ A_i ∧ l ∈ A_i].    (22)

Recalling Equation (2), we attempt to match the first and second order moments by choosing K's diagonal as K_{jj} = p_j, and its off-diagonal entries as:

K_{jl} = √( max( p_j p_l − p_{jl}, 0 ) ).    (23)
To ensure a valid starting kernel, we then project K by clipping its eigenvalues at 0 and 1.
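A sketch of the moment-matching construction (pure Python; the example sets are made up, and the max(·, 0) clipping inside the square root is how pairs that co-occur more often than independence would predict are handled):

```python
import math

# Hypothetical training subsets over a 3-item ground set
examples = [{0}, {0, 1}, {1, 2}, {0, 2}, {2}]
n = len(examples)
N = 3

# Equation (22): singleton and pairwise inclusion frequencies
p = [sum(1 for A in examples if j in A) / n for j in range(N)]
pp = [[sum(1 for A in examples if j in A and l in A) / n
       for l in range(N)] for j in range(N)]

# Equation (23)-style moment matching: the diagonal matches singleton
# marginals, and each off-diagonal is chosen so that
# det(K_{jl}) = K_jj * K_ll - K_jl^2 approximates the pair frequency.
K = [[0.0] * N for _ in range(N)]
for j in range(N):
    K[j][j] = p[j]
for j in range(N):
    for l in range(j + 1, N):
        K[j][l] = K[l][j] = math.sqrt(max(p[j] * p[l] - pp[j][l], 0.0))
```

As noted above, the resulting matrix must still be projected so that its eigenvalues lie in [0, 1] before it is a valid marginal kernel.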
4.1 Baby registry tests
Consider a product recommendation task, where the ground set comprises products that can be added to a particular category (e.g., toys or safety) in a baby registry. A very simple recommendation system might suggest products that are popular with other consumers; however, this does not account for negative interactions: if a consumer has already chosen a carseat, they most likely will not choose an additional carseat, no matter how popular it is with other consumers. DPPs are ideal for capturing such negative interactions. A learned DPP could be used to populate an initial, basic registry, as well as to provide live updates of product recommendations as a consumer builds their registry.
To test our DPP learning algorithms, we collected a dataset consisting of baby registries from Amazon.com, filtering out those listing fewer than or more than products. Amazon characterizes each product in a baby registry as belonging to one of categories, such as "toys" and "safety". For each registry, we created sub-registries by splitting it according to these categories. (A registry with toy items and safety items produces two sub-registries.) For each category, we then filtered down to its top most frequent items, and removed any product that did not occur in at least sub-registries. We discarded categories left with too few remaining (non-empty) sub-registries for training. The resulting categories have an average inventory size of products and an average number of sub-registries. We used % of the data for training and % for testing. Note that categories such as "carseats" contain more diverse items than just their namesake; for instance, "carseats" also contains items such as seat-back kick protectors and rear-facing baby view mirrors. See Appendix D for more dataset details and for quartile numbers for all of the experiments.
Figure 1(a) shows the relative test log-likelihood differences of EM and KA when starting from a Wishart initialization. These numbers are the medians from trials (draws from the Wishart). EM gains an average of %, but has a much greater advantage for some categories than for others. Speculating that EM has more of an advantage when the off-diagonal components of K are truly important, that is, when products exhibit strong negative interactions, we created a matrix for each category with the true data marginals from Equation (22) as its entries. We then checked how large this matrix's off-diagonal entries are relative to its diagonal. This quantity correlates well with the relative gains for EM: the categories for which EM has the largest gains (safety, furniture, carseats, and strollers) all exhibit relatively large off-diagonal components, while categories such as feeding and gear have small ones. Investigating further, we found that, as foreshadowed in Section 3.1, KA performs particularly poorly in the strongly off-diagonal setting because of its projection step; projection can result in KA learning a near-diagonal matrix.
If instead of the Wishart initialization we use the moments-matching initializer, KA's projection problem is alleviated, as the starting point is closer to the true kernel. With this initializer, KA and EM have comparable test log-likelihoods (average EM gain of 0.4%). However, the moments-matching initializer is not a perfect fix for the KA algorithm in all settings. For instance, consider a data-poor setting, where for each category we have only training examples. In this case, even with the moments-matching initializer, EM has a significant edge over KA, as shown in Figure 1(b): EM gains an average of %, with a maximum gain of % for the safety category.
To give a concrete example of the advantages of EM training, Figure 2(a) shows a greedy approximation (Nemhauser et al., 1978, Section 4) to the most-likely ten-item registry in the category "safety", according to a Wishart-initialized EM model. The corresponding KA selection differs from Figure 2(a) in that it replaces the lens filters and the head support with two additional baby monitors: "Motorola MBP36 Remote Wireless Video Baby Monitor" and "Summer Infant Baby Touch Digital Color Video Monitor". It seems unlikely that many consumers would select three different brands of video monitor.
Having established that EM is more robust than KA, we conclude with an analysis of runtimes. Figure 2(b) shows the ratio of KA's runtime to EM's for each category. As discussed earlier, EM is asymptotically faster than KA, and we see this borne out in practice even for the moderate values of N and k that occur in our registries dataset: on average, EM is several times faster than KA.
5 Conclusion
We have explored learning DPPs in a setting where the kernel K is not assumed to have fixed values or a restrictive parametric form. By exploiting K's eigendecomposition, we were able to develop a novel EM learning algorithm. On a product recommendation task, we have shown EM to be faster and more robust than the naive approach of maximizing likelihood by projected gradient ascent. In other applications for which modeling negative interactions between items is important, we anticipate that EM will similarly have a significant advantage.
Acknowledgments
This work was supported in part by ONR Grant N00014-10-1-0746.
Appendix A Proof of Lemma 1
Lemma 1 gives the exact form of q(J | A_i)'s kernel. Before giving the proof, we briefly note that q differs slightly from the typical DPPs we have seen thus far, in its conditional nature. More precisely, for a set A_i of size k, q(J | A_i) qualifies as a k-DPP: a DPP conditioned on sets of size k. Formally, a k-DPP with (non-marginal) kernel Q assigns probability proportional to det(Q_J) for |J| = k, and probability zero for |J| ≠ k. As for regular DPPs, a k-DPP can be efficiently sampled from and marginalized, via modifications of the standard DPP algorithms. For example, the normalization constant for a k-DPP is given by the identity Σ_{J : |J|=k} det(Q_J) = e_k(λ̂_1, ..., λ̂_N), where e_k represents the kth-order elementary symmetric polynomial on the eigenvalues λ̂ of Q Kulesza and Taskar (2011b). Baker and Harwell (1996)'s "summation algorithm" computes e_k in O(Nk) time. In short, k-DPPs enjoy many of the advantages of DPPs. Their identical parameterization, in terms of a single kernel matrix, makes our E-step simple, and their normalization and marginalization properties are useful for the M-step updates.
Proof.
Since the E-step is an unconstrained KL divergence minimization, we have:
q(J | A_i) = P(J | A_i) ∝ P(J) P(A_i | J),    (24)
where the proportionality follows because A_i is held constant in the conditional distribution. Recalling Equation (9), notice that P(A_i | J) can be re-expressed as follows, for |J| = |A_i|:
P(A_i | J) = det([V_J]_{A_i} [V_J]_{A_i}^⊤) = det([V_J]_{A_i}^⊤ [V_J]_{A_i}),    (25)

where [V_J]_{A_i} denotes the rows of V_J indexed by A_i.
This follows from the identity det(M M^⊤) = det(M)^2 = det(M^⊤ M), valid for any full-rank square matrix M. The subsequent swapping of V_J and V_J^⊤, once det([V_J V_J^⊤]_{A_i}) is rewritten as det([V_J]_{A_i} [V_J]_{A_i}^⊤), does not change the indexed submatrix.
Plugging this back into Equation (24):
q(J | A_i) ∝ P(J) det([V_J]_{A_i}^⊤ [V_J]_{A_i}) = Π_{j∈J} λ_j Π_{j∉J} (1 − λ_j) det([V_J]_{A_i}^⊤ [V_J]_{A_i}),    (26)
where the determinant plays the role of an elementary DPP, just as in Equation (6), but over the index set J rather than the item set A_i. Multiplying this expression by a term that is constant for all J maintains proportionality and allows us to simplify the Π_{j∉J} (1 − λ_j) term. Dividing by Π_{j=1}^N (1 − λ_j) and recalling the definition of P(J) from Equation (8):
q(J | A_i) ∝ Π_{j∈J} ( λ_j / (1 − λ_j) ) det([V_J]_{A_i}^⊤ [V_J]_{A_i})    (27)
          = det( R_J [V_J]_{A_i}^⊤ [V_J]_{A_i} R_J ) = det( [R V^⊤ E_{A_i} E_{A_i}^⊤ V R]_J ).    (28)
Having eliminated all remaining dependence on the item set, it is now possible to express q(J | A_i) as proportional to the principal minor of a single PSD kernel matrix (the Q^{A_i} in the statement of the lemma). Thus, q(J | A_i) is a k-DPP. ∎
Appendix B M-step eigenvalue updates
We can exploit standard k-DPP marginalization formulas to efficiently compute the eigenvalue updates for EM. Specifically, the exponential-size sum over J from Equation (18) can be reduced to the computation of an eigendecomposition and several elementary symmetric polynomials on the resulting eigenvalues. Let e^{−r}_{k−1} be the (k−1)th-order elementary symmetric polynomial over all eigenvalues of Q^{A_i} except for the rth one. Then, by direct application of (Kulesza and Taskar, 2012, Equation 205), q's singleton marginals are:
q(j ∈ J | A_i) = (1/e_k) Σ_{r=1}^N λ̂_r (v̂_{rj})^2 e^{−r}_{k−1}.    (29)
As previously noted, elementary symmetric polynomials can be efficiently computed using Baker and Harwell (1996)’s “summation algorithm”.
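A sketch of the summation algorithm, together with the singleton-marginal formula, in pure Python (the eigenvalues are made up, and we work directly in the kernel's eigenbasis, i.e. the eigenvector matrix is taken to be the identity, so each marginal reduces to λ_r e^{−r}_{k−1} / e_k):

```python
def elem_sym_polys(lams, k):
    """Summation-algorithm recurrence: returns [e_0, ..., e_k] for the
    eigenvalue list lams, in O(len(lams) * k) time."""
    e = [1.0] + [0.0] * k
    for lam in lams:
        for d in range(k, 0, -1):      # update in place, highest order first
            e[d] += lam * e[d - 1]
    return e

lams = [1.0, 2.0, 3.0]   # hypothetical kernel eigenvalues
k = 2

e = elem_sym_polys(lams, k)            # e_0 = 1, e_1 = 6, e_2 = 11

# Singleton marginals of the k-DPP in its eigenbasis:
# q(r in J) = lam_r * e_{k-1}^{-r} / e_k, where e^{-r} omits lam_r.
marginals = []
for r in range(len(lams)):
    rest = lams[:r] + lams[r + 1:]
    e_minus_r = elem_sym_polys(rest, k - 1)
    marginals.append(lams[r] * e_minus_r[k - 1] / e[k])

# The marginals sum to k, since |J| = k with probability one.
```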
We can further reduce the complexity of this formula by noting that the rank of the matrix Q^{A_i} is at most k. Because Q^{A_i} has only k nonzero eigenvalues, it is the case that, for all r with λ̂_r = 0:

λ̂_r (v̂_{rj})^2 e^{−r}_{k−1} = 0,    (30)

so those terms drop out of Equation (29).
Recalling that the eigenvectors and eigenvalues of Q^{A_i} are denoted v̂_r and λ̂_r, the computation of the singleton marginals of q that are necessary for the M-step eigenvalue updates can therefore be written as a sum of only k terms:
q(j ∈ J | A_i) = (1/e_k) Σ_{r=1}^k λ̂_r (v̂_{rj})^2 e^{−r}_{k−1}.    (31)
This simplified formula is dominated by the cost of the eigendecomposition required to find the λ̂_r and v̂_r. This cost can be further reduced, to O(k^3), by eigendecomposing a related k x k matrix instead of the N x N matrix Q^{A_i}. Specifically, consider the matrix Q̃^{A_i} = E_{A_i}^⊤ V R^2 V^⊤ E_{A_i} = [V R^2 V^⊤]_{A_i}. Let u_r and λ̃_r be the eigenvectors and eigenvalues of Q̃^{A_i}. The λ̃_r are identical to the nonzero eigenvalues of Q^{A_i}, and the eigenvectors are related as follows:

v̂_r = λ̃_r^{−1/2} R V^⊤ E_{A_i} u_r.    (32)
Getting a v̂_r via Equation (32) is an O(Nk) operation, or O(Nk^2) for all k of them, given the eigendecomposition of Q̃^{A_i}. Since this eigendecomposition is an O(k^3) operation, it is dominated by the O(Nk^2). To compute Equation (31) for all j and r then requires only O(Nk) time, given the v̂_r and the elementary symmetric polynomials. Thus, letting k = max_i |A_i|, the overall complexity of the eigenvalue updates is O(nNk^2).
Appendix C M-step eigenvector gradient
Recall that the M-step objective, from Equation (13), is:

F(q, K) = Σ_{i=1}^n E_{q(J|A_i)}[ Σ_{j∈J} log λ_j + Σ_{j∉J} log(1 − λ_j) + log det([V_J V_J^⊤]_{A_i}) ].    (33)
The P(J) term does not depend on the eigenvectors, so we only have to be concerned with the P(A_i | J) term when computing the eigenvector derivatives. Recall that this term is defined as follows:

G(V) = Σ_{i=1}^n Σ_J q(J | A_i) log det([V_J V_J^⊤]_{A_i}).    (34)
Applying standard matrix derivative rules such as (Petersen and Pedersen, 2012, Equation 55), the gradient of the M-step objective with respect to entry (a, r) of V, for a ∈ A_i and r ∈ J, is:

∂G/∂V_{ar} = Σ_{i=1}^n Σ_{J : r∈J} q(J | A_i) 2 [ ([V_J V_J^⊤]_{A_i})^{-1} [V_J]_{A_i} ]_{âr̂},    (35)
where the subscript â indicates the index of item a within A_i, r̂ indicates the corresponding column index of r within J, and [V_J]_{A_i} is the eigenvector submatrix restricted to rows A_i and columns J. Based on this, we can more simply express the derivative with respect to the entire matrix V:

∂G/∂V = Σ_{i=1}^n Σ_J q(J | A_i) 2 E_{A_i} ([V_J V_J^⊤]_{A_i})^{-1} [V_J]_{A_i} E_J^⊤,    (36)
where E_{A_i} (respectively E_J) consists of the columns of the N x N identity corresponding to the items in A_i (respectively the indices in J); these matrices simply expand each summand with zero rows for all a ∉ A_i and zero columns for all r ∉ J. The exponential-size sum over J could be approximated by drawing a sufficient number of samples from q(J | A_i), but in practice that proves somewhat slow. It turns out that it is possible, by exploiting the relationship between q and V, to perform the first gradient step on V without needing to sample q.
C.1 Exact computation of the first gradient
Recall that Q^{A_i} is defined to be R V^⊤ E_{A_i} E_{A_i}^⊤ V R, where R = diag(√(λ_1/(1 − λ_1)), ..., √(λ_N/(1 − λ_N))). The P(A_i | J) portion of the M-step objective, Equation (34), can be rewritten in terms of Q^{A_i}:

G(V) = Σ_{i=1}^n Σ_J q(J | A_i) [ log det(Q^{A_i}_J) − Σ_{j∈J} log( λ_j / (1 − λ_j) ) ].    (37)
Taking the gradient of the M-step objective with respect to V (the second term in Equation (37) does not depend on V):

∂G/∂V = Σ_{i=1}^n Σ_J q(J | A_i) ∂ log det(Q^{A_i}_J) / ∂V.    (38)
Plugging in the k-DPP form of q derived in the main body of the paper, q(J | A_i) = det(Q^{A_i}_J) / e_k(Q^{A_i}):

∂G/∂V = Σ_{i=1}^n ( 1 / e_k(Q^{A_i}) ) Σ_{J : |J|=k} ∂ det(Q^{A_i}_J) / ∂V.    (39)
Recall from the background section the identity used to normalize a k-DPP, Σ_{J : |J|=k} det(Q_J) = e_k(Q), and consider taking its derivative with respect to V:

Σ_{J : |J|=k} ∂ det(Q^{A_i}_J) / ∂V = ∂ e_k(Q^{A_i}) / ∂V.    (40)
Note that this relationship is only true at the start of the M-step, before V (and hence K) undergoes any gradient updates; a gradient step for V would mean that q, which remains fixed during the M-step, could no longer be expressed as det(Q^{A_i}_J) / e_k(Q^{A_i}). Thus, the formula we develop in this section is only valid for the first gradient step.
Substituting Equation (40) into Equation (39) gives the final form of the gradient with respect to V:

∂G/∂V = Σ_{i=1}^n ( ∂ e_k(Q^{A_i}) / ∂V ) / e_k(Q^{A_i}) = Σ_{i=1}^n ∂ log e_k(Q^{A_i}) / ∂V.    (41)

Thus, we can compute the value of the first gradient on V exactly in polynomial time.
C.2 Faster computation of the first gradient
Recall from Appendix B that the rank of the matrix Q^{A_i} is at most k and that its nonzero eigenvalues are identical to those of the k x k matrix Q̃^{A_i} = [V R^2 V^⊤]_{A_i}. Since the elementary symmetric polynomial e_k depends only on the eigenvalues of its argument, this means Q̃^{A_i} can substitute for Q^{A_i} in Equation (41):

∂G/∂V = Σ_{i=1}^n ∂ log e_k(Q̃^{A_i}) / ∂V,    (42)
where the ith term in the sum is understood to index into the rows of V corresponding to A_i. Further, because Q̃^{A_i} is only of size k x k, its kth-order elementary symmetric polynomial is simply its determinant:

e_k(Q̃^{A_i}) = det(Q̃^{A_i}).    (43)
Plugging this back into Equation (42) and applying standard matrix derivative rules:

∂G/∂V = Σ_{i=1}^n 2 E_{A_i} ([V R^2 V^⊤]_{A_i})^{-1} [V]_{A_i} R^2.    (44)
Thus, the initial M-step derivative with respect to V can be more efficiently computed via the above equation, which matches Equation (20) in the main text. Specifically, each matrix [V R^2 V^⊤]_{A_i} can be computed in time O(Nk^2), since R^2 is a diagonal matrix. It can be inverted in time O(k^3), which is dominated by the O(Nk^2) since k ≤ N. Thus, letting k = max_i |A_i|, the overall complexity of computing the eigenvector gradient in Equation (44) is O(nNk^2).
Appendix D Baby registry details
The two figures below contain details, referred to in the main body of the paper, about the baby registry dataset and the learning methods' performance on it.


References
 Kulesza [2012] A. Kulesza. Learning with Determinantal Point Processes. PhD thesis, University of Pennsylvania, 2012.

 Kulesza and Taskar [2011a] A. Kulesza and B. Taskar. Learning Determinantal Point Processes. In Conference on Uncertainty in Artificial Intelligence (UAI), 2011a.
 Kulesza and Taskar [2011b] A. Kulesza and B. Taskar. k-DPPs: Fixed-Size Determinantal Point Processes. In International Conference on Machine Learning (ICML), 2011b.
 Lin and Bilmes [2012] H. Lin and J. Bilmes. Learning Mixtures of Submodular Shells with Application to Document Summarization. In Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
 Kulesza and Taskar [2012] A. Kulesza and B. Taskar. Determinantal Point Processes for Machine Learning. Foundations and Trends in Machine Learning, 5(2–3), 2012.
 Krause et al. [2008] A. Krause, A. Singh, and C. Guestrin. Near-Optimal Sensor Placements in Gaussian Processes: Theory, Efficient Algorithms, and Empirical Studies. Journal of Machine Learning Research (JMLR), 9:235–284, 2008.
 Krause and Guestrin [2005] A. Krause and C. Guestrin. Near-Optimal Non-Myopic Value of Information in Graphical Models. In Conference on Uncertainty in Artificial Intelligence (UAI), 2005.
 Affandi et al. [2014] R. Affandi, E. Fox, R. Adams, and B. Taskar. Learning the Parameters of Determinantal Point Process Kernels. In International Conference on Machine Learning (ICML), 2014.
 Dughmi et al. [2009] S. Dughmi, T. Roughgarden, and M. Sundararajan. Revenue Submodularity. In Electronic Commerce, 2009.
 Macchi [1975] O. Macchi. The Coincidence Approach to Stochastic Point Processes. Advances in Applied Probability, 7(1), 1975.
 Snoek et al. [2013] J. Snoek, R. Zemel, and R. Adams. A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data. In Neural Information Processing Systems (NIPS), 2013.
 Kang [2013] B. Kang. Fast Determinantal Point Process Sampling with Application to Clustering. In Neural Information Processing Systems (NIPS), 2013.
 Affandi et al. [2013a] R. Affandi, E. Fox, and B. Taskar. Approximate Inference in Continuous Determinantal Point Processes. In Neural Information Processing Systems (NIPS), 2013a.
 Shah and Ghahramani [2013] A. Shah and Z. Ghahramani. Determinantal Clustering Process — A Nonparametric Bayesian Approach to Kernel Based Semi-Supervised Clustering. In Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
 Affandi et al. [2013b] R. Affandi, A. Kulesza, E. Fox, and B. Taskar. Nyström Approximation for Large-Scale Determinantal Processes. In Conference on Artificial Intelligence and Statistics (AIStats), 2013b.
 Gillenwater et al. [2012a] J. Gillenwater, A. Kulesza, and B. Taskar. Near-Optimal MAP Inference for Determinantal Point Processes. In Neural Information Processing Systems (NIPS), 2012a.
 Zou and Adams [2013] J. Zou and R. Adams. Priors for Diversity in Generative Latent Variable Models. In Neural Information Processing Systems (NIPS), 2013.
 Affandi et al. [2012] R. Affandi, A. Kulesza, and E. Fox. Markov Determinantal Point Processes. In Conference on Uncertainty in Artificial Intelligence (UAI), 2012.

 Gillenwater et al. [2012b] J. Gillenwater, A. Kulesza, and B. Taskar. Discovering Diverse and Salient Threads in Document Collections. In Empirical Methods in Natural Language Processing (EMNLP), 2012b.
 Kulesza and Taskar [2010] A. Kulesza and B. Taskar. Structured Determinantal Point Processes. In Neural Information Processing Systems (NIPS), 2010.
 Hough et al. [2006] J. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal Processes and Independence. Probability Surveys, 3, 2006.
 Petersen and Pedersen [2012] K. Petersen and M. Pedersen. The Matrix Cookbook. Technical report, University of Denmark, 2012.
 Levitin and Polyak [1966] E. Levitin and B. Polyak. Constrained Minimization Methods. USSR Computational Mathematics and Mathematical Physics, 6(5):1–50, 1966.
 Henrion and Malick [2011] D. Henrion and J. Malick. Projection Methods for Conic Feasibility Problems. Optimization Methods and Software, 26(1):23–46, 2011.
 Neal and Hinton [1998] R. Neal and G. Hinton. A New View of the EM Algorithm that Justifies Incremental, Sparse and Other Variants. Learning in Graphical Models, 1998.
 Edelman et al. [1998] A. Edelman, T. Arias, and S. Smith. The Geometry of Algorithms with Orthogonality Constraints. SIAM Journal on Matrix Analysis and Applications (SIMAX), 1998.
 James [1964] A. James. Distributions of Matrix Variates and Latent Roots Derived from Normal Samples. Annals of Mathematical Statistics, 35(2):475–501, 1964.
 Nemhauser et al. [1978] G. Nemhauser, L. Wolsey, and M. Fisher. An Analysis of Approximations for Maximizing Submodular Set Functions I. Mathematical Programming, 14(1), 1978.