# Bayesian experimental design using regularized determinantal point processes

In experimental design, we are given n vectors in d dimensions, and our goal is to select k ≪ n of them to perform expensive measurements, e.g., to obtain labels/responses, for a linear regression task. Many statistical criteria have been proposed for choosing the optimal design, with popular choices including A- and D-optimality. If prior knowledge is given, typically in the form of a d×d precision matrix A, then all of the criteria can be extended to incorporate that information via a Bayesian framework. In this paper, we demonstrate a new fundamental connection between Bayesian experimental design and determinantal point processes, the latter being widely used for sampling diverse subsets of data. We use this connection to develop new efficient algorithms for finding (1+ϵ)-approximations of optimal designs under four optimality criteria: A, C, D and V. Our algorithms can achieve this when the desired subset size k is Ω(d_A/ϵ + log(1/ϵ)/ϵ²), where d_A ≤ d is the A-effective dimension, which can often be much smaller than d. Our results offer direct improvements over a number of prior works, for both Bayesian and classical experimental design, in terms of algorithm efficiency, approximation quality, and range of applicable criteria.


## 1 Introduction

Consider a collection of experiments parameterized by d-dimensional vectors x_1, …, x_n, and let X denote the n×d matrix with rows x_i^⊤. The outcome of the i-th experiment is a random variable y_i = x_i^⊤ w^* + ξ_i, where w^* is the parameter vector of a linear model with prior distribution w^* ∼ N(0, σ² A^{−1}), and ξ_i ∼ N(0, σ²) is independent noise. In experimental design, we have access to the vectors x_i, for i = 1, …, n, but we are allowed to observe only a small number of outcomes for experiments we choose. Suppose that we observe the outcomes from a subset S of experiments. The posterior distribution of w^* given y_S (the vector of outcomes in S) is:

 w^* ∣ y_S ∼ N( (X_S^⊤ X_S + A)^{−1} X_S^⊤ y_S,  σ²(X_S^⊤ X_S + A)^{−1} ),

where X_S denotes the matrix with rows x_i^⊤ for i ∈ S. In the Bayesian framework of experimental design [CV95], we assume that the prior precision matrix A of the linear model is known, and our goal is to choose S so as to minimize some quantity (a.k.a. an optimality criterion) measuring the “size” of the posterior covariance matrix σ²(X_S^⊤ X_S + A)^{−1}. This quantity is a function of the subset covariance X_S^⊤ X_S. Note that if matrix A is non-invertible then, even though the prior distribution is ill-defined, we can still interpret it as having no prior information in the directions with eigenvalue 0. In particular, for A = 0 we recover classical experimental design, where the covariance matrix of w^* given y_S is σ²(X_S^⊤ X_S)^{−1}. We will write the Bayesian optimality criteria as functions f_A(·), where f_A(Σ) corresponds to the subset covariance Σ = X_S^⊤ X_S. The following standard criteria [CV95, Puk06] are of primary interest to us:

1. A-optimality: f_A(Σ) = tr((Σ + A)^{−1});

2. C-optimality: f_A(Σ) = c^⊤(Σ + A)^{−1}c for some vector c;

3. D-optimality: f_A(Σ) = det((Σ + A)^{−1})^{1/d};

4. V-optimality: f_A(Σ) = (1/n)·tr(X(Σ + A)^{−1}X^⊤).

Other popular criteria (less relevant to our discussion) include E-optimality, f_A(Σ) = ‖(Σ + A)^{−1}‖ (here, ‖·‖ denotes the spectral norm), and G-optimality, f_A(Σ) = max_{i∈[n]} x_i^⊤(Σ + A)^{−1}x_i.
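The four criteria of primary interest are straightforward to compute numerically. Below is a minimal sketch on a toy instance; the subset S, the prior precision A, and the direction c are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Toy instance: n = 8 candidate experiments in d = 3 dimensions.
rng = np.random.default_rng(0)
n, d = 8, 3
X = rng.standard_normal((n, d))      # rows x_i of the experiment matrix
A = 0.5 * np.eye(d)                  # prior precision matrix (arbitrary)
S = [0, 2, 5]                        # a candidate design of size k = 3
XS = X[S]

M = np.linalg.inv(XS.T @ XS + A)     # posterior covariance, up to sigma^2

a_opt = np.trace(M)                       # A-optimality: tr((Sigma + A)^{-1})
c = np.ones(d)
c_opt = c @ M @ c                         # C-optimality: c^T (Sigma + A)^{-1} c
d_opt = np.linalg.det(M) ** (1.0 / d)     # D-optimality: det((Sigma + A)^{-1})^{1/d}
v_opt = np.trace(X @ M @ X.T) / n         # V-optimality: (1/n) tr(X (Sigma + A)^{-1} X^T)
```

All four values are monotone in the posterior covariance M, which is what makes them sensible "size" measures for the design problem.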

The general task we consider is given as follows, where [n] denotes the set {1, …, n}:

Bayesian experimental design. Given an n×d matrix X, a criterion f_A and k ≥ 1,

 efficiently minimize f_A(X_S^⊤ X_S) over S ⊆ [n] s.t. |S| = k.

Optimal value. Given X, A and k, we denote the optimum as OPT_k = min_{S: |S|=k} f_A(X_S^⊤ X_S).

The prior work around this problem can be grouped into two research questions. The first question asks what we can infer about OPT_k just from the spectral information about the problem, which is contained in the data covariance matrix Σ_X = X^⊤X. The second question asks when there exists a polynomial time algorithm for finding a (1+ϵ)-approximation for OPT_k.

Question 1:  Given only Σ_X, A and k, what is the upper bound on OPT_k?

Question 2:  Given X, A and k, can we efficiently find a (1+ϵ)-approximation for OPT_k?

A key aspect of both of these questions is how large the subset size k has to be for us to provide useful answers. As a baseline, we should expect meaningful results when k is at least d (see discussion in [AZLSW17]), and in fact, for classical experimental design (i.e., when A = 0), the problem becomes ill-defined when k < d. In the Bayesian setting we should be able to exploit the additional prior knowledge to achieve strong results even for k ≪ d. Intuitively, the larger the prior precision matrix A, the fewer degrees of freedom we have in the problem. To measure this, we use the statistical notion of effective dimension [AM15].

###### Definition 1

For psd matrices Σ and A, let the A-effective dimension of Σ be defined as d_A(Σ) = tr(Σ(Σ + A)^{−1}). We will use the shorthand d_A when referring to d_A(X^⊤X).
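The definition above is a one-liner in code. A small sketch (arbitrary toy data; the only point is that 0 ≤ d_A(Σ) ≤ d always holds, and that stronger regularization shrinks the effective dimension):

```python
import numpy as np

def effective_dimension(Sigma, A):
    """A-effective dimension d_A(Sigma) = tr(Sigma (Sigma + A)^{-1})."""
    return float(np.trace(Sigma @ np.linalg.inv(Sigma + A)))

rng = np.random.default_rng(1)
n, d = 100, 10
X = rng.standard_normal((n, d))
Sigma = X.T @ X                      # full data covariance

d_A = effective_dimension(Sigma, 50.0 * np.eye(d))
```

Each eigenvalue λ of Σ contributes λ/(λ+a) ∈ [0, 1) to d_A(Σ), so the effective dimension interpolates between 0 (overwhelming prior) and d (no prior).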

Recently, [DW18b] obtained bounds on Bayesian A/V-optimality criteria for k ≥ d_A, suggesting that d_A is the right notion of degrees of freedom for this problem. We argue that d_A can in fact be far too large of an estimate, because it does not take into account the subset size k when computing the effective dimension. Intuitively, since d_A is computed using the full data covariance Σ_X = X^⊤X, it is not in the appropriate scale with respect to the smaller subset covariance X_S^⊤X_S. One way to correct this is to increase the regularization from A to (n/k)A and use d_{(n/k)A}(Σ_X) as the degrees of freedom. Another way is to rescale the full covariance to (k/n)Σ_X and use d_A((k/n)Σ_X) as the degrees of freedom. In fact, since d_{(n/k)A}(Σ_X) = d_A((k/n)Σ_X), these two approaches are identical. Note that d_{(n/k)A} ≤ d_A, and this gap can be very large for some problems (see discussion in Appendix B). The following result supports the above reasoning by showing that for any k such that d_{(n/k)A} ≤ k ≤ n, there is a subset S of size k whose value f_A(X_S^⊤X_S) is close to f_A((k/n)Σ_X). This not only improves on [DW18b] in terms of the supported range of sizes k, but also in terms of the obtained bound (see Section 2 for a comparison).

###### Theorem 1

Let f_A be A/C/D/V-optimality and d_{(n/k)A} be d_{(n/k)A}(Σ_X). For any k such that d_{(n/k)A} ≤ k ≤ n,

 OPT_k ≤ ( 1 + 8·d_{(n/k)A}/k + 8·√(ln(k/d_{(n/k)A})/k) ) · f_A((k/n)·Σ_X).
###### Remark 2

We give an O(nd²) time algorithm for finding a subset S that certifies this bound.

To establish Theorem 1, we propose a new sampling distribution S ∼ DPP^p_reg(X, A), where p = (p_1, …, p_n) is a vector of weights. This is a special regularized variant of a determinantal point process (DPP), a well-studied family of distributions [KT12] with numerous applications in sampling diverse subsets of elements. Given a psd matrix A and a weight vector p ∈ [0,1]^n, we define DPP^p_reg(X, A) as a distribution over subsets S ⊆ [n] (of all sizes) such that:

 Pr(S) ∝ det(X_S^⊤ X_S + A) · ∏_{i∈S} p_i · ∏_{i∉S} (1 − p_i)   (see Definition 2).

A number of regularized DPPs have been proposed recently [Der19, DW18b], mostly within the context of Randomized Numerical Linear Algebra (RandNLA) [Mah11, DM16, DM17]. To our knowledge, ours is the first such definition that strictly falls under the traditional definition of a DPP [KT12]. We show this in Section 3, where we also prove that regularized DPPs can be decomposed into a low-rank DPP plus i.i.d. Bernoulli sampling (Theorem 5). This decomposition reduces the sampling cost from O(n³) to O(nd²), and involves a more general result about DPPs defined via a correlation kernel (Lemma 9), which is of independent interest.

To prove Theorem 1, in Section 4 we demonstrate a fundamental connection between an A-regularized DPP and Bayesian experimental design with precision matrix A. For simplicity of exposition, let the weight vector be uniformly equal to k/n, i.e., p = (k/n, …, k/n). If S ∼ DPP^p_reg(X, A) and f_A is any one of the A/C/D/V-optimality criteria, then:

 (a)  E[f_A(X_S^⊤ X_S)] ≤ f_A((k/n)·Σ_X)  and  (b)  E[|S|] ≤ d_{(n/k)A} + k. (1)

Theorem 1 follows by showing an inequality similar to (1a) when S is restricted to subsets of size at most k (proof in Section 4). When A = 0, DPP^p_reg(X, A) bears a lot of similarity to proportional volume sampling, an (unregularized) determinantal distribution proposed by [NSTT19]. That work used an inequality similar to (1a) for obtaining (1+ϵ)-approximate algorithms in A/D-optimal classical experimental design (Question 2 with A = 0). However, the algorithm of [NSTT19] for proportional volume sampling takes high-degree polynomial time, making it practically infeasible. On the other hand, the time complexity of sampling from DPP^p_reg(X, A) is only O(nd²), and recent advances in RandNLA for DPP sampling [DWH18, DWH19, Der19] suggest that even faster sampling is also possible. Extending the ideas of [NSTT19] to our new regularized DPP distribution, we obtain efficient (1+ϵ)-approximation algorithms for A/C/D/V-optimal Bayesian experimental design.

###### Theorem 3

Let f_A be A/C/D/V-optimality and d_A be d_A(X^⊤X). If k ≥ C·(d_A/ϵ + ln(1/ϵ)/ϵ²) for a sufficiently large constant C, then there is a polynomial time algorithm that finds S of size k such that

 f_A(X_S^⊤ X_S) ≤ (1 + ϵ) · OPT_k.
###### Remark 4

The algorithm referred to in Theorem 3 first solves a convex relaxation of the task via a semi-definite program (SDP) to find the weights w^*, then samples from the distribution DPP^{w^*}_reg(X, A) a small number of times in expectation. The expected cost in addition to the SDP is O(nd²).

Note that unlike in Theorem 1, in Theorem 3 we use the unrescaled effective dimension d_A instead of the rescaled one, d_{(n/k)A}. The actual effective dimension that applies here (given in the proof in Section 4) depends on the SDP solution w^*. It is always upper bounded by d_A, but it may be significantly smaller. This result is a direct extension of [NSTT19] to the Bayesian setting and to the C/V-optimality criteria. Moreover, in their case, proportional volume sampling is usually the computational bottleneck (because of its high-degree polynomial time complexity), whereas for us the cost of sampling is negligible compared to the SDP. A number of different methods can be used to solve the SDP relaxation (see Section 5). For example, [AZLSW17] suggest using an iterative optimizer called entropic mirror descent, which is known to exhibit fast convergence and can run in O(T·nd²) time, where T is the number of iterations.
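To make the last point concrete, here is a minimal entropic mirror descent sketch for the A-optimality relaxation. For simplicity it drops the cap w_i ≤ 1 (a reasonable simplification when k ≪ n, so the renormalized weights stay small); the step size and iteration count are arbitrary illustrative choices, not tuned values from the paper.

```python
import numpy as np

def a_criterion(w, X, A):
    """Relaxed A-optimality objective f_A(sum_i w_i x_i x_i^T)."""
    return float(np.trace(np.linalg.inv(X.T @ (w[:, None] * X) + A)))

def mirror_descent(X, A, k, eta=0.2, iters=300):
    """Entropic mirror descent on {w >= 0, sum_i w_i = k}; the cap w_i <= 1 is ignored."""
    n = X.shape[0]
    w = np.full(n, k / n)                      # feasible uniform start
    for _ in range(iters):
        M = np.linalg.inv(X.T @ (w[:, None] * X) + A)
        grad = -np.sum((X @ M) ** 2, axis=1)   # d f / d w_i = -x_i^T M^2 x_i
        w = w * np.exp(-eta * grad)            # multiplicative (entropic) update
        w = k * w / w.sum()                    # renormalize onto the scaled simplex
    return w

rng = np.random.default_rng(3)
n, d, k = 40, 5, 10
X = rng.standard_normal((n, d))
A = np.eye(d)
w0 = np.full(n, k / n)
w_md = mirror_descent(X, A, k)
```

Each iteration costs O(nd² + d³) = O(nd²), matching the per-iteration cost quoted above; the multiplicative update keeps the weights positive, and the rescaling keeps them summing to k.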

## 2 Related work

We first discuss the prior works that focus on bounding the experimental design optimality criteria without obtaining (1+ϵ)-approximation algorithms. The first non-trivial bounds for the classical A-optimality criterion (with A = 0) were shown by [AB13]. Their result implies that OPT_k ≤ ((n−d+1)/(k−d+1))·f_0(Σ_X) for any d ≤ k ≤ n, and they provide polynomial time algorithms for finding such solutions. The result was later extended by [DW17, DW18b, DW18a] to the Bayesian case, obtaining that for any k ≥ d_A, we have OPT_k ≤ ((n−d_A+1)/(k−d_A+1))·f_A(Σ_X), and also a faster O(nd²) time algorithm was provided. Their result can be easily extended to cover any psd matrix A and the V/C-optimality criteria (but not D-optimality). The key improvements of our Theorem 1 are that we cover a potentially much wider range of subset sizes, because d_{(n/k)A} ≤ d_A, and our bound can be much tighter because its multiplicative factor tends to 1 as k grows. Finally, [DCMW19] propose a new notion of minimax experimental design, which is related to A/V-optimality. They also use a determinantal distribution for subset selection; however, due to different assumptions, their bounds are incomparable.

A number of works proposed (1+ϵ)-approximation algorithms for experimental design which start with solving a convex relaxation of the problem, and then use some rounding strategy to obtain a discrete solution (see Table 1 for a comparison). For example, [WYS17] gave an approximation algorithm for classical A/V-optimality with k ≥ Ω(d²/ϵ), where the rounding is done in a greedy fashion, and some randomized rounding strategies are also discussed. [NSTT19] suggested proportional volume sampling for the rounding step and obtained approximation algorithms for classical A/D-optimality with k ≥ Ω(d/ϵ + ln(1/ϵ)/ϵ²). Their approach is particularly similar to ours (when A = 0). However, as discussed earlier, while their algorithms are polynomial, they are virtually intractable. [AZLSW17] proposed an efficient algorithm with a (1+ϵ)-approximation guarantee for a wide range of optimality criteria, including A/C/D/E/V/G-optimality, both classical and Bayesian, when k ≥ Ω(d/ϵ²). Our results improve on this work in two ways: (1) in terms of the dependence on ϵ for A/C/D/V-optimality, and (2) in terms of the dependence on the dimension (by replacing d with d_A) in the Bayesian setting. A lower bound shown by [NSTT19] implies that our Theorem 3 cannot be directly extended to E-optimality, but a similar lower bound does not exist for G-optimality. We remark that the approximation approaches relying on a convex relaxation can generally be converted to an upper bound on OPT_k akin to our Theorem 1; however, none of them apply to the regime of k < d, which is of primary interest in the Bayesian setting.

Purely greedy approximation algorithms have been shown to provide guarantees in a number of special cases of experimental design. One example is the classical D-optimality criterion, which can be converted to a submodular function [BGS10]. Also, greedy algorithms for the Bayesian A/V-optimality criteria have been considered [BBKT17, CR17b]. These methods can only provide a constant factor approximation guarantee (as opposed to 1+ϵ), and the factor is generally problem dependent (which means it could be arbitrarily large). Finally, a number of heuristics with good empirical performance have been proposed, such as Fedorov’s exchange method [CN80]. However, in this work we focus on methods that provide theoretical approximation guarantees.

## 3 A new regularized determinantal point process

In this section we introduce the determinantal sampling distribution we use for obtaining guarantees in Bayesian experimental design. Determinantal point processes (DPPs) form a family of distributions which are used to model repulsion between elements in a random set, with many applications in machine learning [KT12]. Here, we focus on the setting where we are sampling a subset S out of all subsets of [n]. Traditionally, a DPP is defined by a correlation kernel, which is an n×n psd matrix K with eigenvalues between 0 and 1, i.e., such that 0 ⪯ K ⪯ I. Given a correlation kernel K, the corresponding DPP is defined as

 S ∼ DPP_cor(K)  iff  Pr(T ⊆ S) = det(K_{T,T})  ∀ T ⊆ [n],

where K_{T,T} is the submatrix of K with rows and columns indexed by T. Another way of defining a DPP, popular in the machine learning community, is via an ensemble kernel L. Any psd matrix L is an ensemble kernel of a DPP defined as:

 S ∼ DPP_ens(L)  iff  Pr(S) ∝ det(L_{S,S}).

Crucially, every DPP_ens(L) is also a DPP_cor(K), but not the other way around. Specifically, we have:

 (a)  DPP_ens(L) = DPP_cor(I − (I + L)^{−1}),  and  (b)  DPP_cor(K) = DPP_ens(K(I − K)^{−1}),

but (b) requires that I − K be invertible, which is not true for some DPPs. (This will be important in our analysis.) The classical algorithm for sampling from a DPP requires the eigendecomposition of either matrix K or L, which in general costs O(n³), followed by a comparatively cheap sampling procedure [HKP06, KT12].

We now define our regularized DPP and describe its connection with correlation and ensemble DPPs.

###### Definition 2

Given a matrix X ∈ R^{n×d}, a sequence p = (p_1, …, p_n) ∈ [0,1]^n and a psd matrix A such that X^⊤D_pX + A is full rank (where D_p = diag(p)), let DPP^p_reg(X, A) be a distribution over S ⊆ [n]:

 Pr(S) = det(X_S^⊤ X_S + A) / det(∑_i p_i x_i x_i^⊤ + A) · ∏_{i∈S} p_i · ∏_{i∉S} (1 − p_i). (2)

The fact that this is a proper distribution (i.e., that it sums to one) can be restated as a determinantal expectation formula: if b_1, …, b_n are independent Bernoulli(p_i) random variables, then

 ∑_{S⊆[n]} det(X_S^⊤ X_S + A) ∏_{i∈S} p_i ∏_{i∉S} (1 − p_i) = E[det(∑_i b_i x_i x_i^⊤ + A)] (∗)= det(∑_i E[b_i] x_i x_i^⊤ + A),

where (∗) was shown by [DM19, Lemma 7] and E[b_i] = p_i. Our main result in this section is an efficient algorithm for DPP^p_reg(X, A) which reduces it to sampling from a correlation DPP.

###### Theorem 5

For any X, p ∈ [0,1]^n and a psd matrix A s.t. X^⊤D_pX + A is full rank, let

 K̄ = D_p^{1/2} X (A + X^⊤ D_p X)^{−1} X^⊤ D_p^{1/2}.

If T ∼ DPP_cor(K̄) and b_1, …, b_n ∼ Bernoulli(p_i) are independent random variables, then T ∪ {i : b_i = 1} ∼ DPP^p_reg(X, A).

###### Remark 6

Since the correlation kernel matrix K̄ has rank at most d, the preprocessing cost of its eigendecomposition is O(nd²). Then, each sample costs only O(nd²).

We prove the theorem in three steps. First, we express DPP^p_reg(X, A) as an ensemble DPP, which requires some additional assumptions on A and p to be possible. Then, we convert the ensemble to a correlation kernel (eliminating the extra assumptions), and finally show that this kernel can be decomposed into a rank-d kernel plus Bernoulli sampling.

**Sampling from DPP^p_reg(X, A)**
Input: X, p, A
1. Compute B = D_p^{1/2} X
2. Compute the SVD of B(A + B^⊤B)^{−1/2} (an eigendecomposition of K̄ = B(A + B^⊤B)^{−1}B^⊤)
3. Sample T ∼ DPP_cor(K̄) [HKP06]
4. Sample b_i ∼ Bernoulli(p_i) for i ∈ [n]
5. return S = T ∪ {i : b_i = 1}
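A runnable sketch of this sampling scheme (Theorem 5): a rank-d correlation DPP drawn via the standard spectral algorithm of [HKP06], combined with independent Bernoulli draws. The instance at the bottom is an arbitrary toy example.

```python
import numpy as np

def sample_dpp_cor(K, rng):
    """Exact sample from DPP_cor(K) via the spectral algorithm of [HKP06]."""
    lam, V = np.linalg.eigh(K)
    lam = np.clip(lam, 0.0, 1.0)
    V = V[:, rng.random(len(lam)) < lam]       # pick an elementary (projection) DPP
    S = []
    while V.shape[1] > 0:
        probs = (V ** 2).sum(axis=1) / V.shape[1]
        probs = np.clip(probs, 0.0, None)
        i = int(rng.choice(len(probs), p=probs / probs.sum()))
        S.append(i)
        j = int(np.argmax(np.abs(V[i])))       # a column with V[i, j] != 0
        V = V - np.outer(V[:, j], V[i] / V[i, j])   # zero out row i
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)             # re-orthonormalize the basis
    return set(S)

def sample_reg_dpp(X, p, A, rng):
    """Theorem 5: rank-d correlation DPP plus independent Bernoulli sampling."""
    B = np.sqrt(p)[:, None] * X                # B = D_p^{1/2} X
    Kbar = B @ np.linalg.inv(A + B.T @ B) @ B.T
    T = sample_dpp_cor(Kbar, rng)
    R = {i for i in range(len(p)) if rng.random() < p[i]}
    return T | R

rng = np.random.default_rng(6)
n, d = 6, 2
X = rng.standard_normal((n, d))
p = np.full(n, 0.3)
A = np.eye(d)
B = np.sqrt(p)[:, None] * X
expected_T = np.trace(B @ np.linalg.inv(A + B.T @ B) @ B.T)  # = d_A(sum_i p_i x_i x_i^T)
sizes = [len(sample_reg_dpp(X, p, A, rng)) for _ in range(2000)]
mean_size = float(np.mean(sizes))
```

Note that tr(K̄) equals the effective dimension d_A(∑_i p_i x_i x_i^⊤), which foreshadows the expected-size bound of Lemma 10.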

###### Lemma 7

Given X, p and A as in Theorem 5, assume that A and I − D_p are invertible. Then,

 DPP^p_reg(X, A) = DPP_ens(D̃ + D̃^{1/2} X A^{−1} X^⊤ D̃^{1/2}),  where D̃ = D_p(I − D_p)^{−1}.

Proof  Let D̃ = D_p(I − D_p)^{−1}. By Definition 2 and the fact that det(D̃_{S,S}) = ∏_{i∈S} p_i/(1 − p_i),

 Pr(S) ∝ det(X_S^⊤ X_S + A) · ∏_{i∈S} p_i · ∏_{i∉S} (1 − p_i) = det(X_S^⊤ X_S + A) · ∏_{i∈S} p_i/(1 − p_i) · ∏_{i=1}^n (1 − p_i)
 ∝ det(A(A^{−1} X_S^⊤ X_S + I)) · det(D̃_{S,S}) = det(A) · det(A^{−1} X_S^⊤ X_S + I) · det(D̃_{S,S})
 ∝ det(X_S A^{−1} X_S^⊤ + I) · det(D̃_{S,S}) = det([D̃^{1/2} X A^{−1} X^⊤ D̃^{1/2} + D̃]_{S,S}),

where the third line uses Sylvester's identity det(I + A^{−1} X_S^⊤ X_S) = det(I + X_S A^{−1} X_S^⊤). This matches the definition of the L-ensemble DPP.
At this point, to sample from DPP^p_reg(X, A), we could simply invoke any algorithm for sampling from an ensemble DPP. However, this would only work for an invertible A, which in particular excludes the important case of A = 0 corresponding to classical experimental design. Moreover, the standard algorithm would require computing the eigendecomposition of the n×n ensemble kernel, which (at least if done naïvely) costs O(n³). Even after this is done, the cost of each sample would still scale with the rank of the full kernel, which can be considerably larger than d. We first address the issue of invertibility of matrix A by expressing our distribution via a correlation DPP.

###### Lemma 8

Given X, p, and A as in Theorem 5 (without any additional assumptions), we have

 DPP^p_reg(X, A) = DPP_cor(D_p + (I − D_p)^{1/2} D_p^{1/2} X (A + X^⊤ D_p X)^{−1} X^⊤ D_p^{1/2} (I − D_p)^{1/2}).

When A and I − D_p are invertible, the proof (given in Appendix A) is a straightforward calculation. Otherwise, we use a limit argument with A + ϵI and (1 − ϵ)p, where ϵ → 0.

Finally, we show that the correlation DPP arrived at in Lemma 8 can be decomposed into a smaller DPP plus Bernoulli sampling. In fact, in the following lemma we obtain a more general recipe for combining DPPs with Bernoulli sampling, which may be of independent interest. Note that if b_1, …, b_n ∼ Bernoulli(d_i) are independent random variables, then {i : b_i = 1} ∼ DPP_cor(D), where D = diag(d_1, …, d_n).

###### Lemma 9

Let K and D be psd matrices with eigenvalues between 0 and 1, and assume that D is diagonal. If T ∼ DPP_cor(K) and R ∼ DPP_cor(D) are sampled independently, then

 T ∪ R ∼ DPP_cor(D + (I − D)^{1/2} K (I − D)^{1/2}).

The lemma is proven in Appendix A. Theorem 5 now follows by combining Lemmas 8 and 9.
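Lemma 9 can be checked exhaustively on a tiny example. The sketch below uses the standard identity Pr(S = A) = |det(K − I_{Ā})| for the exact-set probabilities of a correlation-kernel DPP (with I_{Ā} the identity restricted to the complement of A); the kernels are arbitrary random choices.

```python
import numpy as np
from itertools import chain, combinations

n = 3
rng = np.random.default_rng(7)
B = rng.standard_normal((n, n))
K = B @ B.T
K /= np.linalg.eigvalsh(K).max() + 0.5        # eigenvalues safely inside (0, 1)
dvec = np.array([0.2, 0.5, 0.7])              # Bernoulli probabilities (diagonal D)

def subsets(n):
    return [tuple(s) for s in
            chain.from_iterable(combinations(range(n), r) for r in range(n + 1))]

def pr_exact(Kmat, S):
    """Pr(S) for DPP_cor(Kmat), via Pr(S) = |det(Kmat - I_complement)|."""
    M = Kmat.copy()
    for i in range(n):
        if i not in S:
            M[i, i] -= 1.0
    return abs(np.linalg.det(M))

# distribution of T ∪ R with T ~ DPP_cor(K) and R ~ independent Bernoulli(dvec)
union_pr = {S: 0.0 for S in subsets(n)}
for T in subsets(n):
    for R in subsets(n):
        pr_R = np.prod([dvec[i] if i in R else 1 - dvec[i] for i in range(n)])
        U = tuple(sorted(set(T) | set(R)))
        union_pr[U] += pr_exact(K, T) * pr_R

half = np.sqrt(1 - dvec)
K_union = np.diag(dvec) + half[:, None] * K * half[None, :]   # Lemma 9 kernel
max_err = max(abs(union_pr[S] - pr_exact(K_union, S)) for S in subsets(n))
```

Enumerating all 8×8 pairs of subsets and accumulating them onto their union reproduces, term by term, the DPP with the combined kernel D + (I − D)^{1/2} K (I − D)^{1/2}.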

## 4 Guarantees for Bayesian experimental design

In this section we prove our main results regarding Bayesian experimental design (Theorems 1 and 3). First, we establish certain properties of the regularized DPP distribution that make it effective in this setting. Even though the size of the sampled subset S ∼ DPP^p_reg(X, A) is random and can be as large as n, it is also highly concentrated around its expectation, which can be bounded in terms of the A-effective dimension. This is crucial, since both of our main results require a subset of deterministically bounded size. Recall that the effective dimension is defined as the function d_A(Σ) = tr(Σ(Σ + A)^{−1}). The omitted proofs are in Appendix A.

###### Lemma 10

Given any X, p and a psd matrix A s.t. X^⊤D_pX + A is full rank, let T and b_1, …, b_n be defined as in Theorem 5. Then, for S ∼ DPP^p_reg(X, A),

 E[|S|] ≤ E[|T|] + E[∑_i b_i] = d_A(∑_i p_i x_i x_i^⊤) + ∑_i p_i.
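Lemma 10's bound can be checked exhaustively on a toy instance (the expected size below is computed directly from the probabilities of Definition 2; all parameters are arbitrary):

```python
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(8)
n, d = 5, 2
X = rng.standard_normal((n, d))
p = rng.uniform(0.2, 0.8, size=n)
A = 0.4 * np.eye(d)

def subsets(n):
    return chain.from_iterable(combinations(range(n), r) for r in range(n + 1))

def unnormalized(S):
    S = list(S)
    rest = [i for i in range(n) if i not in S]
    return (np.linalg.det(X[S].T @ X[S] + A)
            * np.prod(p[S]) * np.prod(1 - p[rest]))

Z = sum(unnormalized(S) for S in subsets(n))
expected_size = sum(len(S) * unnormalized(S) for S in subsets(n)) / Z

Sigma_p = X.T @ (p[:, None] * X)                              # sum_i p_i x_i x_i^T
bound = np.trace(Sigma_p @ np.linalg.inv(Sigma_p + A)) + p.sum()
```

The slack in the bound comes only from the possible overlap between the DPP component T and the Bernoulli component {i : b_i = 1}.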

Next, we show two expectation inequalities for the matrix inverse and matrix determinant, which hold for the regularized DPP. We use them to bound the Bayesian optimality criteria in expectation.

###### Lemma 11

Whenever DPP^p_reg(X, A) is a well-defined distribution, it holds that

 E[(X_S^⊤ X_S + A)^{−1}] ⪯ (∑_i p_i x_i x_i^⊤ + A)^{−1}, (3)
 E[det(X_S^⊤ X_S + A)^{−1}] ≤ det(∑_i p_i x_i x_i^⊤ + A)^{−1}. (4)
###### Corollary 12

Let f_A be A/C/D/V-optimality. Whenever DPP^p_reg(X, A) is well-defined,

 E[f_A(X_S^⊤ X_S)] ≤ f_A(∑_i p_i x_i x_i^⊤).

Proof  In the case of A-, C-, and V-optimality, the function f_A(Σ) is a linear transformation of the matrix (Σ + A)^{−1}, so the bound follows from (3). For D-optimality, we apply Jensen's inequality (since t ↦ t^{1/d} is concave) followed by (4):

 E[f_A(X_S^⊤ X_S)] = E[det(X_S^⊤ X_S + A)^{−1/d}] ≤ E[det(X_S^⊤ X_S + A)^{−1}]^{1/d} ≤ det(∑_i p_i x_i x_i^⊤ + A)^{−1/d},

which completes the proof.
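Corollary 12 can also be verified by exhaustive enumeration, here for the A-optimality criterion (arbitrary toy parameters):

```python
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(9)
n, d = 5, 2
X = rng.standard_normal((n, d))
p = rng.uniform(0.2, 0.8, size=n)
A = 0.6 * np.eye(d)

def subsets(n):
    return chain.from_iterable(combinations(range(n), r) for r in range(n + 1))

def unnormalized(S):
    S = list(S)
    rest = [i for i in range(n) if i not in S]
    return (np.linalg.det(X[S].T @ X[S] + A)
            * np.prod(p[S]) * np.prod(1 - p[rest]))

def f_a(S):
    """A-optimality value tr((X_S^T X_S + A)^{-1}) of a subset S."""
    S = list(S)
    return float(np.trace(np.linalg.inv(X[S].T @ X[S] + A)))

Z = sum(unnormalized(S) for S in subsets(n))
lhs = sum(f_a(S) * unnormalized(S) for S in subsets(n)) / Z   # E[f_A(X_S^T X_S)]
Sigma_p = X.T @ (p[:, None] * X)
rhs = float(np.trace(np.linalg.inv(Sigma_p + A)))             # f_A(sum_i p_i x_i x_i^T)
```

The expectation on the left is taken under the regularized DPP of Definition 2, and the right-hand side replaces the random subset covariance by its "mean" ∑_i p_i x_i x_i^⊤.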
Finally, we present the key lemma that puts everything together. This result is essentially a generalization of Theorem 1, and Theorem 3 also follows from it.

###### Lemma 13

Let f_A be A/C/D/V-optimality and d_w be d_A(Σ_w), where Σ_w = ∑_i w_i x_i x_i^⊤. For some ϵ ∈ (0, 1), let w ∈ [0,1]^n be such that ∑_i w_i = k, and assume that e^{−kϵ²/32} ≤ d_w/k. If k ≥ 4d_w/ϵ, then a subset S of size k can be found in expected time O(nd²) that satisfies

 f_A(X_S^⊤ X_S) ≤ (1 + 8·d_w/k + 8·√(ln(k/d_w)/k)) · f_A(Σ_w).

Proof  Let p be defined so that p_i = (1 − ϵ/2)·w_i, and suppose that S ∼ DPP^p_reg(X, A). Then, using Corollary 12, we have

 Pr(|S| ≤ k) · E[f_A(X_S^⊤ X_S) ∣ |S| ≤ k] ≤ E[f_A(X_S^⊤ X_S)] ≤ f_A(∑_i p_i x_i x_i^⊤) ≤ (1 + ϵ) · f_A(∑_i w_i x_i x_i^⊤).

Using Lemma 10 we can bound the expected size of S as follows:

 E[|S|] ≤ d_A(∑_i p_i x_i x_i^⊤) + ∑_i p_i ≤ d_w + (1 − ϵ/2)·k.

Let μ = E[|S|] and α = 1 − ϵ/4. If k ≥ 4d_w/ϵ, then μ ≤ αk. Since DPP^p_reg(X, A) is a determinantal point process, |S| is a Poisson binomial random variable, so for μ ≤ αk,

 Pr(|S| > k) ≤ e^{−(k−αk)²/(2k)} = e^{−(k/2)(1−α)²} ≤ e^{−kϵ²/32} ≤ d_w/k,

where the last inequality follows from the assumption on ϵ and k. Thus Pr(|S| ≤ k) ≥ 1 − d_w/k, and we have

 E[f_A(X_S^⊤ X_S) ∣ |S| ≤ k] ≤ (1 + ϵ)/(1 − d_w/k) · f_A(Σ_w) = (1 + (ϵ + d_w/k)/(1 − d_w/k)) · f_A(Σ_w)
  ≤ (1 + 7·d_w/k + 8·√(ln(k/d_w)/k)) · f_A(Σ_w).

Denote F_k = E[f_A(X_S^⊤ X_S) ∣ |S| ≤ k]. Markov's inequality implies that

 Pr(f_A(X_S^⊤ X_S) ≥ (1 + δ)·F_k ∣ |S| ≤ k) ≤ 1/(1 + δ).

Also, we showed that Pr(|S| ≤ k) ≥ 1 − d_w/k. Setting δ = d_w/(Ck) for a sufficiently large constant C, we obtain that with probability at least Ω(d_w/k), the random set S has size at most k and

 f_A(X_S^⊤ X_S) ≤ (1 + d_w/(Ck)) · (1 + 7·d_w/k + 8·√(ln(k/d_w)/k)) · f_A(Σ_w) ≤ (1 + 8·d_w/k + 8·√(ln(k/d_w)/k)) · f_A(Σ_w).

We can sample S conditioned on |S| ≤ k and on f_A(X_S^⊤ X_S) bounded as above by rejection sampling. When |S| < k, the set is completed to size k with arbitrary indices. On average, O(k/d_w) samples from DPP^p_reg(X, A) are needed, so the cost is O(nd²) for the eigendecomposition, plus the cost of drawing the samples and of recomputing f_A(X_S^⊤ X_S) for each of them.
To prove the main results, we use Lemma 13 with appropriately chosen weights .

Proof of Theorem 1  Let w_i = k/n for all i in Lemma 13. Then we have Σ_w = (k/n)·Σ_X and also d_w = d_A((k/n)·Σ_X) = d_{(n/k)A}. Since for any set S of size k we have OPT_k ≤ f_A(X_S^⊤ X_S), the result follows.

Proof of Theorem 3  As discussed in [AZLSW17, BV04], the following convex relaxation of experimental design can be written as a semi-definite program and solved using standard SDP solvers:

 w^* = argmin_w f_A(∑_{i=1}^n w_i x_i x_i^⊤),  subject to  ∀i: 0 ≤ w_i ≤ 1,  ∑_i w_i = k. (5)

The solution satisfies f_A(∑_i w_i^* x_i x_i^⊤) ≤ OPT_k. If we use w = w^* in Lemma 13, then, observing that d_{w^*} ≤ d_A, and setting k ≥ C·(d_A/ϵ + ln(1/ϵ)/ϵ²) for a sufficiently large constant C, the algorithm in the lemma finds a subset S such that f_A(X_S^⊤ X_S) ≤ (1 + ϵ)·f_A(∑_i w_i^* x_i x_i^⊤) ≤ (1 + ϵ)·OPT_k.