# On Computationally Tractable Selection of Experiments in Measurement-Constrained Regression Models

We derive computationally tractable methods to select a small subset of experiment settings from a large pool of given design points. The primary focus is on linear regression models, while the technique extends to generalized linear models and Delta's method (estimating functions of linear regression models) as well. The algorithms are based on a continuous relaxation of an otherwise intractable combinatorial optimization problem, with sampling or greedy procedures as post-processing steps. Formal approximation guarantees are established for both algorithms, and numerical results on both synthetic and real-world data confirm the effectiveness of the proposed methods.


## 1 Introduction

Despite the availability of large datasets, in many applications collecting labels for all data points is not possible due to measurement constraints. We consider the problem of measurement-constrained regression, where we are given a large pool of data points but can only observe a small set of labels. Classical experimental design approaches in the statistical literature (Pukelsheim, 1993) have investigated this problem, but the proposed solutions tend to be combinatorial. In this work, we investigate computationally tractable methods for selecting data points to label from a given pool of design points in measurement-constrained regression.

Despite the simplicity and wide applicability of OLS, in practice it may not be possible to obtain the full n-dimensional response vector y due to measurement constraints. It is then common practice to select a small subset of rows (e.g., k ≪ n rows) of X so that the statistical efficiency of regression on the selected subset of design points is maximized. Compared to the classical experimental design problem (Pukelsheim, 1993), where the design points can be freely designed, in this work we consider the setting where the selected design points must come from an existing (finite) design pool X.

Below we list three example applications where such measurement constraints are relevant:

##### Example 1 (Material synthesis)

Material synthesis experiments are time-consuming and expensive, and their results are sensitive to experimental setting features such as temperature, duration and reactant ratio. Given hundreds, or even thousands, of possible experimental settings, it is important to select a handful of representative settings such that a model can be built with maximized statistical efficiency to predict the quality of the outcome material from experimental features. In this paper we consider such an application of low-temperature microwave-assisted thin film crystallization (Reeja-Jayan et al., 2012; Nakamura et al., 2017) and demonstrate the effectiveness of our proposed algorithms.

##### Example 2 (CPU benchmarking)

Central processing units (CPU) are vital to the performance of a computer system. It is an interesting statistical question to understand how known manufacturing parameters (clock period, cache size, etc.) of a CPU affect its execution time (performance) on benchmark computing tasks (Ein-Dor and Feldmesser, 1987). As the evaluation of real benchmark execution time is time-consuming and costly, it is desirable to select a subset of CPUs available in the market with diverse range of manufacturing parameters so that the statistical efficiency is maximized by benchmarking for the selected CPUs.

##### Example 3 (Wind speed prediction)

In Chen et al. (2015) a data set was created to record wind speed across a year at established measurement locations on highways in Minnesota, US. Due to instrumentation constraints, wind speed can only be measured at intersections of highways, and a small subset of such intersections is selected for wind speed measurement in order to reduce data gathering costs.

In this work, we primarily focus on the linear regression model (though extensions to generalized linear models and functions of linear models are also considered later)

 y=Xβ0+ε,

where X is the n×p design matrix, y is the n-dimensional response, and ε = (ε1, …, εn) are homoscedastic Gaussian noise terms with variance σ2. β0 is the p-dimensional vector of regression coefficients that one wishes to estimate. We consider the "large-scale, low-dimensional" setting where n ≫ p and X has full column rank. A common estimator is the Ordinary Least Squares (OLS) estimator:

 ˆβols=argminβ∈Rp∥y−Xβ∥22=(X⊤X)−1X⊤y.
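As a concrete illustration (ours, not from the paper), the OLS estimator can be computed directly from the normal equations; the design, coefficients, and noise level below are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))                 # design matrix (full column rank a.s.)
beta0 = np.array([1.0, -2.0, 0.5])          # "true" coefficients, for illustration
y = X @ beta0 + 0.1 * rng.normal(size=n)    # homoscedastic Gaussian noise

# OLS via the normal equations (X'X)^{-1} X'y; for ill-conditioned designs,
# np.linalg.lstsq is the numerically preferred route.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

With n ≫ p and modest noise, `beta_ols` recovers `beta0` up to small estimation error of order σ·√(tr[(X⊤X)⁻¹]).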

The mean square error of the estimated regression coefficients is E[∥ˆβols−β0∥22] = σ2tr[(X⊤X)−1]. Under measurement constraints, it is well known that the statistically optimal subset for estimating the regression coefficients is given by the A-optimality criterion (Pukelsheim, 1993):

 S∗=argminS⊆[n],|S|≤ktr[(X⊤SXS)−1]. (1)

Despite the statistical optimality of Eq. (1), the optimization problem is combinatorial in nature and the optimal subset is difficult to compute. A brute-force search over all subsets of size k requires enumerating all (n choose k) candidates, which is infeasible even for moderately sized designs.
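To make the combinatorial nature of Eq. (1) concrete, here is a minimal brute-force sketch (our illustration, not an algorithm proposed in the paper): it evaluates the A-optimality criterion for every size-k subset and is only viable for very small n and k.

```python
import numpy as np
from itertools import combinations

def a_optimal_brute_force(X, k):
    """Exhaustive search for the A-optimal size-k row subset (Eq. 1).

    Cost is (n choose k) trace evaluations -- feasible only for tiny n, k.
    """
    n, p = X.shape
    best_S, best_val = None, np.inf
    for S in combinations(range(n), k):
        XS = X[list(S)]
        G = XS.T @ XS
        if np.linalg.matrix_rank(G) < p:   # singular subset: infinite A-criterion
            continue
        val = np.trace(np.linalg.inv(G))
        if val < best_val:
            best_S, best_val = S, val
    return best_S, best_val

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 2))
S, val = a_optimal_brute_force(X, 3)
```

Even here the loop visits 56 subsets; at n = 1000, k = 20 the count exceeds 10^41, which is why the rest of the paper pursues a continuous relaxation instead.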

In this work, we focus on computationally tractable methods for experiment selection that achieve near-optimal statistical efficiency in linear regression models. We consider two experiment selection models: the with replacement model, where each design point (row of X) can be selected more than once with independent noise for each selection, and the without replacement model, where distinct row subsets are required. We propose two computationally tractable algorithms: a sampling based algorithm that achieves O(1) approximation of the statistically optimal solution for the with replacement model and, when X is well-conditioned, also works for the without replacement model; in the "soft budget" setting |Ŝ| = O(k), its approximation ratio further improves to 1+ε. We also propose a greedy method that achieves 1+O(p2/k) approximation for the without replacement model regardless of the conditioning of the design pool X.

## 2 Problem formulation and backgrounds

We first give a formal definition of the experiment selection problem in linear regression models: [experiment selection problem in linear regression models] Let X be a known n×p design matrix with full column rank and let k be the subset budget, with p ≤ k ≤ n. An experiment selection problem aims to find a subset S ⊆ [n] of size k, either deterministically or randomly, then observes yS = XSβ0 + ε̃, where each coordinate of ε̃ is an i.i.d. Gaussian random variable with zero mean and equal variance, and then generates an estimate ˆβ of the regression coefficients based on (XS, yS). Two types of experiment selection algorithms are considered:

1. With replacement: S is a multi-set which allows duplicates of indices. Note that fresh (independent) noise is imposed after, and independently of, experiment selection, and hence duplicated design points receive independent noise. We use A1(k) to denote the class of all with replacement experiment selection algorithms.

2. Without replacement: S is a standard set that may not contain duplicate indices. We use A2(k) to denote the class of all without replacement experiment selection algorithms.

As evaluation criterion, we consider the mean square error E[∥ˆβ−β0∥22], where ˆβ is an estimator of β0 with XS and yS as inputs. We study computationally tractable algorithms that approximately achieve the minimax rate of convergence over A1(k) or A2(k):

 infA∈Ab(k)supβ0∈RpE[∥ˆβA−β0∥22],b∈{1,2}. (2)

Formally, we give the following definition: [C(n,p,k)-approximate algorithm] Fix b ∈ {1,2} and C(n,p,k) ≥ 1. We say an algorithm A ∈ Ab(k) (either deterministic or randomized) is a C(n,p,k)-approximate algorithm if, for any X with full column rank, A produces a subset Ŝ of size k and an estimate ˆβA in polynomial time such that, with probability at least 0.8,

 supβ0∈RpE[∥ˆβA−β0∥22∣∣XS]≤C(n,p,k)⋅infA′∈Ab(k)supβ0∈RpE[∥ˆβA′−β0∥22]. (3)

Here both expectations are taken over the randomness in the noise variables and the inherent randomness in A. Table 1 gives an overview of the approximation ratios of the algorithms proposed in this paper. We remark that the combinatorial A-optimality solution of Eq. (1) upper bounds the minimax risk (since minimaxity is defined over deterministic algorithms as well); hence the approximation guarantees also hold with respect to the combinatorial A-optimality objective.

### 2.1 Related work

There has been an increasing amount of work on fast solvers for the general least-squares problem minβ∥y−Xβ∥22. Most existing work along this direction (Woodruff, 2014; Dhillon et al., 2013; Drineas et al., 2011; Raskutti and Mahoney, 2015) focuses solely on computational aspects and does not consider statistical constraints such as limited measurements of y. A convex optimization formulation was proposed in Davenport et al. (2015) for a constrained adaptive sensing problem, which is a special case of our setting, but without finite sample guarantees with respect to the combinatorial problem. In Horel et al. (2014) a computationally tractable approximation algorithm was proposed for the D-optimality criterion of the experimental design problem. However, the core idea in Horel et al. (2014) of pipage rounding an SDP solution (Ageev and Sviridenko, 2004) is not applicable in our problem because the objective function we consider in Eq. (4) is not submodular.

Popular subsampling techniques such as leverage score sampling (Drineas et al., 2008) were studied in least square and linear regression problems (Zhu et al., 2015; Ma et al., 2015; Chen et al., 2015). While computation remains the primary subject, measurement constraints and statistical properties were also analyzed within the linear regression model (Zhu et al., 2015). However, none of the above-mentioned (computationally efficient) methods achieve near minimax optimal statistical efficiency in terms of estimating the underlying linear model β0, since the methods can be worse than uniform sampling, which has a fairly large approximation constant for general X. One exception is Avron and Boutsidis (2013), which proposed a greedy algorithm whose error bound for estimating β0 is within a multiplicative factor of the minimax optimal statistical efficiency. The multiplicative factor is however large and depends on the size n of the full sample pool, as we remark in Table 1 and also in Sec. 3.4.

Another related area is active learning (Chaudhuri et al., 2015; Hazan and Karnin, 2015; Sabato and Munos, 2014), which is a stronger setting where feedback from prior measurements can be used to guide subsequent data selection. Chaudhuri et al. (2015) analyzes an SDP relaxation in the context of active maximum likelihood estimation. However, the analysis in Chaudhuri et al. (2015) only works for the with replacement model and the two-stage feedback-driven strategy proposed in Chaudhuri et al. (2015) is not available under the experiment selection model defined in Definition 2 where no feedback is assumed.

### 2.2 Notations

For a matrix A, we use ∥A∥p to denote the induced p-norm of A. In particular, ∥A∥1 is the maximum absolute column sum and ∥A∥∞ is the maximum absolute row sum. ∥A∥F denotes the Frobenius norm of A. Let σ1(A) ≥ σ2(A) ≥ ⋯ ≥ σp(A) be the singular values of A, sorted in descending order. The condition number is defined as κ(A) = σ1(A)/σp(A). For sequences of random variables Xn and X, we use Xn →p X to denote that Xn converges in probability to X. We say an = O(bn) if lim supn an/bn < ∞ and an = Ω(bn) if lim infn an/bn > 0. For two p-dimensional symmetric matrices A and B, we write A ⪯ B if z⊤Az ≤ z⊤Bz for all z ∈ Rp, and A ⪰ B if z⊤Az ≥ z⊤Bz for all z ∈ Rp.

## 3 Methods and main results

We describe two computationally feasible algorithms for the experiment selection problem, both based on a continuous convex optimization problem. Statistical efficiency bounds are presented for both algorithms, with detailed proofs given in Sec. 7.

### 3.1 Continuous optimization and minimax lower bounds

We consider the following continuous optimization problem, which is a convex relaxation of the combinatorial A-optimality criterion of Eq. (1):

 π∗ =argminπ∈Rnf(π;X)=argminπ∈Rntr[(X⊤diag(π)X)−1], (4) s.t.π≥0,∥π∥1≤k, ∥π∥∞≤1(only for the without replacement model).

Note that the constraint ∥π∥∞ ≤ 1 is only relevant for the without replacement model; for the with replacement model we drop this constraint from the optimization problem. It is easy to verify that both the objective and the feasible set in Eq. (4) are convex, and hence the global optimal solution of Eq. (4) can be obtained by computationally tractable algorithms. In particular, we describe an SDP formulation and a practical projected gradient descent algorithm in Appendices B and D; both provably converge to the global optimal solution of Eq. (4) with running time scaling polynomially in n and p.
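A minimal sketch of the projected gradient descent approach is given below (the full algorithm is in Appendix D; the step size, iteration count, and bisection-based projection here are our own illustrative choices). It uses the gradient ∂f/∂πi = −xi⊤Σ(π)−2xi, where Σ(π) = X⊤diag(π)X.

```python
import numpy as np

def project_capped_simplex(v, k, iters=60):
    """Euclidean projection onto {pi : 0 <= pi_i <= 1, sum_i pi_i = k},
    via bisection on the shift tau in pi_i = clip(v_i - tau, 0, 1)."""
    lo, hi = v.min() - 1.0, v.max() + 1.0
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        if np.clip(v - tau, 0.0, 1.0).sum() > k:
            lo = tau          # total mass too large: raise the threshold
        else:
            hi = tau
    return np.clip(v - 0.5 * (lo + hi), 0.0, 1.0)

def f_obj(pi, X):
    """A-optimality objective f(pi; X) = tr[(X' diag(pi) X)^{-1}]."""
    return np.trace(np.linalg.inv(X.T @ (pi[:, None] * X)))

def pgd_a_optimal(X, k, steps=300, lr=0.2):
    """Projected gradient descent on Eq. (4) with the without-replacement
    constraints; returns the best feasible iterate found."""
    n, p = X.shape
    pi = np.full(n, k / n)               # feasible uniform starting point
    best = pi.copy()
    for _ in range(steps):
        Sigma_inv = np.linalg.inv(X.T @ (pi[:, None] * X))
        grad = -np.sum((X @ Sigma_inv) ** 2, axis=1)  # df/dpi_i = -x_i' Sigma^-2 x_i
        pi = project_capped_simplex(pi - lr * grad, k)
        if f_obj(pi, X) < f_obj(best, X):
            best = pi
    return best

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
k = 10
pi_star = pgd_a_optimal(X, k)
```

Because the problem is convex, any first-order method with a correct projection converges to the global optimum; tracking the best iterate just makes the sketch robust to an aggressive step size.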

We first present two facts, which are proved in Sec. 7.

###### Fact 3.1

Let π and π′ be feasible solutions of Eq. (4) such that πi ≤ π′i for all i. Then f(π;X) ≥ f(π′;X), with equality if and only if π = π′.

###### Fact 3.2

∥π∗∥1 = k.

We remark that the inverse monotonicity of f in π implies the second fact, which is used to understand the sparsity of π∗ in later sections.

The following theorem shows that the optimal solution of Eq. (4) lower bounds the minimax risk defined in Eq. (2). Its proof is placed in Sec. 7. Let f∗1(k;X) and f∗2(k;X) be the optimal objective values of Eq. (4) for with replacement and without replacement, respectively. Then for b ∈ {1,2},

 infA∈Ab(k)supβ0∈RpE[∥ˆβA−β0∥22]≥σ2⋅f∗b(k;X). (5)

Despite the fact that Eq. (4) is computationally tractable, its solution π∗ is not a valid experiment selection algorithm under a measurement budget of k, because many more than k components of π∗ can be nonzero. In the following sections, we discuss strategies for choosing a subset of rows Ŝ with |Ŝ| = O(k) (i.e., soft constraint) or |Ŝ| ≤ k (i.e., hard constraint), using the solution π∗ of Eq. (4).

### 3.2 Sampling based experiment selection: soft size constraint

We first consider a weaker setting where a soft constraint is imposed on the size of the selected subset Ŝ; in particular, it is allowed that |Ŝ| = O(k) with high probability, meaning that a constant fraction of over-selection is allowed. The more restrictive setting of a hard constraint |Ŝ| ≤ k is treated in the next section.

A natural idea for obtaining a valid subset of size O(k) is to sample from a weighted row distribution specified by π∗. Let Σ∗ = X⊤diag(π∗)X and define distributions P(1) and P(2) over j ∈ [n] as

 P(1):p(1)j=π∗jx⊤jΣ−1∗xj/p, with replacement; P(2):p(2)j=π∗j/k, without replacement.

Note that both P(1) and P(2) sum to one, because ∑j π∗j xj⊤Σ−1∗xj = tr(Σ−1∗Σ∗) = p and ∥π∗∥1 = k.

Under the without replacement setting the distribution is straightforward: P(2) is proportional to the optimal continuous weights π∗; under the with replacement setting, the sampling distribution P(1) also takes into account the leverage scores (effective resistances) xj⊤Σ−1∗xj of the data points in the weighted covariance Σ∗. Later analysis (Theorem 3.2) shows that this helps with the finite-sample condition on k. Figure 1 gives details of the sampling based algorithms for both with and without replacement settings.
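The sampling step can be sketched as follows (this is our reading of Figure 1; the handling of repetition counts and of deduplication is simplified, and the weight vector in the demo is a toy feasible choice, not the true π∗).

```python
import numpy as np

def sample_subset(X, pi_star, k, with_replacement=True, rng=None):
    """Draw a subset of row indices from the continuous solution pi_star.

    With replacement: k i.i.d. draws from P(1), the leverage-reweighted
    distribution p_j proportional to pi*_j * x_j' Sigma*^{-1} x_j / p.
    Without replacement: k i.i.d. draws from P(2) proportional to pi*_j,
    keeping distinct indices only (so |S| <= k; a soft-budget variant).
    """
    rng = rng or np.random.default_rng()
    n, p = X.shape
    Sigma_inv = np.linalg.inv(X.T @ (pi_star[:, None] * X))
    if with_replacement:
        probs = pi_star * np.sum((X @ Sigma_inv) * X, axis=1) / p
    else:
        probs = pi_star / pi_star.sum()
    probs = probs / probs.sum()          # guard against rounding drift
    idx = rng.choice(n, size=k, replace=True, p=probs)
    return list(idx) if with_replacement else sorted(set(idx))

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 2))
pi = np.full(30, 0.5)                    # toy feasible weights, not the optimum
S_wr = sample_subset(X, pi, 8, with_replacement=True, rng=rng)
S_wo = sample_subset(X, pi, 8, with_replacement=False, rng=rng)
```

The term `np.sum((X @ Sigma_inv) * X, axis=1)` computes the per-row quantity xj⊤Σ−1∗xj, so `probs` sums to one in the with-replacement branch by the trace identity noted above.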

The following proposition bounds the size of Ŝ with high probability: with high probability, |Ŝ| = O(k). The proof applies Markov's inequality, noting that an additional O(1) samples may need to be added due to the ceiling operator in with replacement sampling.

The sampling procedure is easy to understand in an asymptotic sense: it is easy to verify that the expectation of the subsampled covariance matches Σ∗ (up to scaling) for both with and without replacement settings. Note that π∗j ≤ 1 by the feasibility constraints, and hence P(2) is a valid distribution for all k. For the with replacement setting, by the weak law of large numbers the subsampled covariance converges in probability to Σ∗ as k → ∞, and hence its inverse trace converges to f∗(k;X) by the continuous mapping theorem. A more refined analysis is presented in Theorem 3.2 to provide explicit conditions under which the asymptotic approximations are valid and on the statistical efficiency of ˆβ, as well as analysis under the more restrictive without replacement regime.

Fix ε > 0 as an arbitrarily small accuracy parameter. Suppose the following conditions hold:

 With replacement:plogk/k=O(ϵ2); Without replacement:∥Σ−1∗∥2κ(Σ∗)∥X∥2∞logp=O(ϵ2).

Here ∥X∥∞ is the induced ∞-norm defined in Sec. 2.2 and κ(Σ∗) denotes the condition number of Σ∗. Then with probability at least 0.8 the subset OLS estimator ˆβ satisfies

 E[∥ˆβ−β0∥22∣∣XˆS]=σ2tr[(X⊤ˆSXˆS)−1]≤(1+O(ϵ))⋅σ2f∗b(X;k),b∈{1,2}.

We adapt the proof technique of Spielman and Srivastava in their seminal work on spectral sparsification of graphs (Spielman and Srivastava, 2011). More specifically, we prove the following stronger "two-sided" result, which shows that Σ̂Ŝ is a spectral approximation of Σ∗ with high probability, under suitable conditions.

Under the same conditions as in Theorem 3.2, with high probability it holds that

 (1−ϵ)z⊤Σ∗z≤z⊤ˆΣˆSz≤(1+ϵ)z⊤Σ∗z,∀z∈Rp,

where Σ̂Ŝ denotes the (suitably reweighted) subsampled covariance and Σ∗ = X⊤diag(π∗)X.

Lemma 3.2 implies that (1−ε)Σ∗ ⪯ Σ̂Ŝ ⪯ (1+ε)Σ∗ with high probability, and hence λi(Σ̂Ŝ) ≥ (1−ε)λi(Σ∗) for all i. Recall that f∗b(k;X) = f(π∗;X) = tr(Σ−1∗) and tr[(Σ̂Ŝ)−1] = ∑i λi(Σ̂Ŝ)−1. Subsequently, for sufficiently small ε, tr[(Σ̂Ŝ)−1] ≤ (1−ε)−1tr(Σ−1∗) ≤ (1+Cε)tr(Σ−1∗) for some constant C. Theorem 3.2 is thus proved.

### 3.3 Sampling based experiment selection: hard size constraint

In some applications it is mandatory to respect a hard subset size constraint; that is, a randomized algorithm is expected to output Ŝ satisfying |Ŝ| ≤ k almost surely, and no over-sampling is allowed. To handle such hard constraints, we revise the algorithm in Figure 1 as follows:

We have the following theorem, which mimics Theorem 3.2 but with weaker approximation bounds: Suppose the following conditions hold:

 with replacement:plogk/k=O(1); without replacement:∥Σ−1∗∥2κ(Σ∗)∥X∥2∞logp=O(1).

Then with probability at least 0.8 the subset estimator satisfies

 E[∥ˆβ−β0∥22∣∣XˆS]=σ2tr[(X⊤ˆSXˆS)−1]≤O(1)⋅σ2f∗b(X;k),b∈{1,2}.

The following lemma is key to the proof of Theorem 3.3. Unlike Lemma 3.2, here we only prove one side of the spectral approximation relation, which suffices for our purposes. To handle the without replacement setting, we invoke a matrix Bernstein inequality for combinatorial matrix sums (Mackey et al., 2014). Let T denote the number of sampling trials. Suppose the following conditions hold:

 With replacement:plogT/T=O(1); Without replacement:∥Σ−1∗∥2κ(Σ∗)∥X∥2∞logp=O(T/n).

Then with high probability the following holds:

 z⊤ˆΣˆSz≥KTz⊤Σ∗z,∀z∈Rp, (6)

where KT is a model-dependent factor, taking one value for with replacement and another for without replacement (see Sec. 7).

Finally, we need to relate the conditions on T in the lemma above to interpretable conditions on the subset budget k. Let δ be an arbitrarily small fixed failure probability. Then, with probability at least 1−δ, T = Ω(k) under both the with replacement and the without replacement models, since the expectation of T scales linearly with k in both cases; applying Markov's inequality completes the argument. Combining the two lemmas above with T = Ω(k), and noting that |Ŝ| ≤ k almost surely (because the revised algorithm never over-samples), we prove Theorem 3.3.

### 3.4 Greedy experiment selection

Avron and Boutsidis (2013) proposed an interesting greedy removal algorithm (outlined in Figure 3) and established the following result: suppose Ŝ of size k is obtained by running the algorithm in Figure 3 with an initial subset S0, |S0| ≥ k. Both S0 and Ŝ are standard sets (i.e., without replacement). Then

 tr[(X⊤ˆSXˆS)−1]≤|S0|−p+1k−p+1tr[(X⊤S0XS0)−1].

In Avron and Boutsidis (2013) the greedy removal procedure in Figure 3 is applied to the entire design set, S0 = [n], which gives the approximation guarantee (n−p+1)/(k−p+1). This results in an approximation ratio of O(n/k) as defined in Eq. (2), by applying the trivial bound tr[(X⊤X)−1] ≤ f∗2(k;X), which is tight for a design that has exactly k non-zero rows.
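A minimal sketch of such a backward greedy removal procedure is given below (in the spirit of Figure 3; Avron and Boutsidis use rank-one updates for efficiency, whereas this illustration simply recomputes the objective at every step).

```python
import numpy as np

def greedy_removal(X, S0, k):
    """Backward greedy elimination: starting from S0, repeatedly remove the
    row whose deletion keeps tr[(X_S' X_S)^{-1}] smallest, until |S| = k."""
    S = list(S0)
    p = X.shape[1]
    while len(S) > k:
        best_i, best_val = None, np.inf
        for i in S:
            T = [j for j in S if j != i]
            G = X[T].T @ X[T]
            if np.linalg.matrix_rank(G) < p:   # removal would make the subset singular
                continue
            val = np.trace(np.linalg.inv(G))
            if val < best_val:
                best_i, best_val = i, val
        S.remove(best_i)
    return sorted(S)

rng = np.random.default_rng(5)
X = rng.normal(size=(20, 3))
S_hat = greedy_removal(X, range(20), k=6)
```

Each outer iteration scans the surviving rows once, so the naive version costs O(|S0|² · p³) overall; Sherman-Morrison downdates reduce the per-candidate cost to O(p²).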

To further improve the approximation ratio, we consider applying the greedy removal procedure with S0 equal to the support of π∗; that is, S0 = {i ∈ [n] : π∗i > 0}. Because π∗ ≤ 1 under the without replacement setting, tr[(X⊤S0XS0)−1] ≤ f(π∗;X), and we have the following corollary: let S0 be the support of π∗ and suppose |S0| ≥ k. Then

 tr[(X⊤ˆSXˆS)−1]≤∥π∗∥0−p+1k−p+1f(π∗;X)=∥π∗∥0−p+1k−p+1f∗2(k;X).

It is thus important to upper bound the support size ∥π∗∥0. With the trivial bound ∥π∗∥0 ≤ n we recover the O(n/k) approximation ratio obtained by applying Figure 3 to S0 = [n]. In order to bound ∥π∗∥0 away from n, we consider the following assumption imposed on X:

###### Assumption 3.1

Define the mapping φ from Rp to Rp(p+1)/2 as φ(x) = (xixj)1≤i≤j≤p, where xi denotes the ith coordinate of a p-dimensional vector x. Denote by φ̃(x) = (φ(x), 1) the affine version of φ. For any p(p+1)/2 + 1 distinct rows of X, their mappings under φ̃ are linearly independent.

Assumption 3.1 is essentially a general-position assumption, which assumes that no design points in lie on a degenerate affine subspace after a specific quadratic mapping. Like other similar assumptions in the literature (Tibshirani, 2013), Assumption 3.1 is very mild and almost always satisfied in practice, for example, if each row of is independently sampled from absolutely continuous distributions.

We are now ready to state the main lemma bounding the support size of π∗: ∥π∗∥0 ≤ k + p(p+1)/2 if Assumption 3.1 holds.

Lemma 3.1 is established through a careful examination of the Karush-Kuhn-Tucker (KKT) conditions of the optimization problem in Eq. (4), which give rise to a linear system in the mapped points φ̃(xi). The complete proof of Lemma 3.1 is given in Sec. 7.5. To contrast the result of Lemma 3.1 with classical rank/support bounds in SDP and/or linear programming (e.g., Pataki's bound (Pataki, 1998)), note that the number of constraints in the SDP formulation of Eq. (4) (see also Appendix B) is linear in n, and hence an analysis similar to (Pataki, 1998) would result in an upper bound on ∥π∗∥0 that scales with n, which is less useful for our analytical purpose.

Combining the results of Lemma 3.1 and Corollary 3.4, we arrive at the following theorem, which upper bounds the approximation ratio of the greedy removal procedure in Figure 3 initialized with the support of π∗. Let π∗ be the optimal solution of the without replacement version of Eq. (4) and let Ŝ be the output of the greedy removal procedure in Figure 3 initialized with S0 = supp(π∗). If k > p and Assumption 3.1 holds, then the subset OLS estimator ˆβ satisfies

 E[∥ˆβ−β0∥22∣∣XˆS]=σ2tr[(X⊤ˆSXˆS)−1]≤(1+p(p+1)2(k−p+1))σ2f∗2(k;X).

Under a slightly stronger condition that k ≥ 2p, the approximation ratio simplifies to 1 + O(p2/k). In addition, if k = Ω(p2/ε), the ratio becomes 1 + O(ε), meaning that near-optimal experiment selection is achievable with computationally tractable methods if Ω(p2/ε) design points are allowed in the selected subset.

### 3.5 Interpretable subsampling example: anisotropic Gaussian design

Even though we have so far considered a fixed pool of design points, here we use an anisotropic Gaussian design example to demonstrate that non-uniform sampling can outperform uniform sampling even under random designs, and to interpret the conditions required in the previous analysis. Let x1, …, xn be i.i.d. rows distributed according to a Gaussian distribution with zero mean and (possibly anisotropic) covariance Σ0.

We first show that non-uniform weights can improve the objective f(π;X). Let πunif = (k/n, …, k/n) be the uniformly weighted solution, corresponding to selecting each row of X uniformly at random. We then have

 f(πunif;X)=1ktr[(1nX⊤X)−1]p→1ktr(Σ−10).

On the other hand, consider a suitable truncation region B capturing a constant fraction 1−γ of the mass of the design distribution, for some constant γ ∈ (0,1); by Markov's inequality, a constant fraction of the rows of X falls in B with high probability. Define the weighted solution πw to be supported on X ∩ B and normalized such that ∥πw∥1 = k. (Note that πw may not be the optimal solution of Eq. (4); however, it suffices for demonstrating the improvement resulting from non-uniform weights.) Then

 f(πw;X)=1ktr[(1n∑xi∈X∩Bxix⊤ip(xi|B))−1]⋅1n∑xi∈X∩B1p(xi|B)p→1k⋅n|X∩B|tr[(∫Bxx⊤dx)−1]⋅∫B1dx≲p2(1−γ)ktr(Σ0).

Here in the last inequality we apply Lemma A. Because tr(Σ−10)·tr(Σ0) ≥ p2 by Jensen's inequality, we conclude that in general f(πw;X) ≲ f(πunif;X), and the gap is larger for ill-conditioned covariances Σ0. This example shows that uneven weights in π help reduce the trace of the inverse of the weighted covariance X⊤diag(π)X.

Under this model, we can also simplify the conditions for the without replacement model in Theorems 3.2 and 3.3. The governing quantity is ∥Σ−1∗∥2κ(Σ∗)∥X∥2∞log p. Using a very conservative bound obtained by comparing with uniformly sampled rows of X, and applying the weak law of large numbers and the continuous mapping theorem, ∥Σ−1∗∥2 and κ(Σ∗) can be controlled in terms of κ(Σ0); in addition, for a Gaussian design ∥X∥2∞ = O(log n) with high probability. Subsequently, the condition ∥Σ−1∗∥2κ(Σ∗)∥X∥2∞log p = O(ε2) is implied by

 pκ(Σ∗)2κ(Σ0)logplognk=O(ϵ2). (7)

Essentially, the condition is reduced to k = Ω(pκ(Σ∗)2κ(Σ0)log p log n/ε2). The linear dependency on p is necessary: we consider the low-dimensional linear regression problem, and k < p would imply an infinite mean-square error in the estimation of β0. We also remark that the condition is scale-invariant, as the design X and any rescaled version cX share the same quantity κ(Σ∗)2κ(Σ0).

### 3.6 Extensions

We discuss possible extensions of our results beyond the estimation of β0 in the linear regression model.

#### 3.6.1 Generalized linear models

In a generalized linear model, the response satisfies E[y|x] = g(x⊤β0) for some known link function g. Under regularity conditions (Van der Vaart, 2000), the maximum-likelihood estimator ˆβmle has asymptotic mean square error tr[I(X,β0)−1], where I(X,β0) is the Fisher information matrix:

 I(X,β0)=−n∑i=1E∂2logp(yi|xi;β0)∂β∂β⊤=−n∑i=1(E∂2logp(yi;ηi)∂η2i)xix⊤i. (8)

Here both expectations are taken over yi conditioned on xi, and the last equality is due to the sufficiency of ηi = x⊤iβ0. The experiment selection problem is then formulated as selecting a subset S of size k, either with or without duplicates, that minimizes tr[I(XS,β0)−1].

It is clear from Eq. (8) that the optimal subset depends on the unknown parameter β0, which is itself the quantity to be estimated. This issue is known as the design dependence problem for generalized linear models (Khuri et al., 2006). One approach considers locally optimal designs (Khuri et al., 2006; Chernoff, 1953), where a consistent initial estimate ˇβ of β0 is first obtained on an initial design subset (notice that a consistent estimate can be obtained using many fewer points than an estimate with a finite approximation guarantee), and ˇβ is then supplied to compute a more refined design subset for the final estimate ˆβ. With the initial estimate ˇβ available, one may apply the transform xi ↦ ˜xi defined as

 ˜xi=√−E[∂2logp(yi;ˇηi)/∂η2i]⋅xi,where ˇηi=x⊤iˇβ.
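For instance (our instantiation, assuming a logistic model, where −E[∂2log p/∂η2] = s(η)(1−s(η)) with s the sigmoid), the transform amounts to rescaling each row by the square root of its Fisher weight, after which ˜X⊤˜X equals the plug-in Fisher information I(X, ˇβ):

```python
import numpy as np

def reweight_logistic(X, beta_check):
    """Rescale rows of X by sqrt of the per-observation Fisher weight
    w_i = s(eta_i) * (1 - s(eta_i)), eta_i = x_i' beta_check, so that
    X_tilde' X_tilde equals the plug-in Fisher information for logistic models."""
    eta = X @ beta_check
    s = 1.0 / (1.0 + np.exp(-eta))       # sigmoid
    w = s * (1.0 - s)                    # per-row Fisher information weight
    return np.sqrt(w)[:, None] * X

rng = np.random.default_rng(6)
X = rng.normal(size=(6, 2))
beta_check = np.array([0.5, -1.0])       # toy initial consistent estimate
X_tilde = reweight_logistic(X, beta_check)
```

Experiment selection can then be run on ˜X exactly as in the linear regression case.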

Note that under regularity conditions