Removal of the points that do not support an E-optimal experimental design

08/02/2018
by Radoslav Harman, et al.

We propose a method for removing design points that cannot support any E-optimal experimental design of a linear regression model with uncorrelated observations. The proposed method can be used to reduce the size of some large E-optimal design problems so that they can be efficiently solved by semidefinite programming. This paper complements the results of Pronzato [Pronzato, L., 2013. A delimitation of the support of optimal designs for Kiefer's ϕ_p-class of criteria. Statistics & Probability Letters 83, 2721--2728], who studied the same problem for analytically simpler criteria of design optimality.


1 Introduction

Consider the problem of optimal experimental design (e.g., Pázman (1986), Pukelsheim (1993), Atkinson et al. (2007), Fedorov and Leonov (2014)). Let $\mathfrak{X}$ be a finite design space of size $n$ and let the information gained from a trial under experimental conditions $x$ (in design point $x$) be expressed by some matrix $H(x) \in \mathcal{S}^m_+$, the so-called elementary information matrix for $x$. The symbol $\mathcal{S}^m_+$ denotes the set of all $m \times m$ nonnegative definite matrices. Let $\Xi$ be the set of all approximate designs (i.e., probability measures) on $\mathfrak{X}$; for any $\xi \in \Xi$, its information matrix is

$$M(\xi) = \sum_{x \in \mathfrak{X}} \xi(x)\, H(x). \qquad (1)$$

Definition (1) covers, for instance, the information matrix in the linear regression model $y(x) = f^{\mathsf{T}}(x)\beta + \varepsilon$ with uncorrelated errors, where $\beta \in \mathbb{R}^m$ is the vector of unknown parameters, $f: \mathfrak{X} \to \mathbb{R}^m$ is the regression function, and the design points $x$ belong to $\mathfrak{X}$. In such a case, the elementary information matrix for $x$ is $H(x) = f(x)f^{\mathsf{T}}(x)$. However, this general approach also covers, e.g., linear regression models with multiple responses in each trial, and it can also be used for the optimal augmentation of existing designs, as demonstrated in Section 6 of Harman and Trnovská (2009). The general form of the problem (1) can also be utilized for the construction of constrained optimal designs; see the discussion and references in Harman (2014).
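To make the notation concrete, here is a minimal Python sketch (with hypothetical regressors, design points and weights) of how the information matrix (1) of an approximate design is assembled from elementary information matrices:

```python
import numpy as np

def elementary_information_matrix(f, x):
    # H(x) = f(x) f(x)^T for a single-response linear model
    fx = f(x)
    return np.outer(fx, fx)

def information_matrix(f, points, weights):
    # M(xi) = sum_x xi(x) H(x) for an approximate design xi
    m = len(f(points[0]))
    M = np.zeros((m, m))
    for x, w in zip(points, weights):
        M += w * elementary_information_matrix(f, x)
    return M

# Hypothetical example: quadratic regression on [-1, 1]
f = lambda x: np.array([1.0, x, x**2])
points = [-1.0, 0.0, 1.0]        # support points
weights = [0.2, 0.6, 0.2]        # design weights, summing to 1
M = information_matrix(f, points, weights)
print(np.linalg.eigvalsh(M))     # eigenvalues of M(xi), smallest first
```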

The main result of this paper (Theorem 1) holds also for an uncountable compact $\mathfrak{X}$; in that case, $\Xi$ becomes the set of all finitely supported discrete measures on $\mathfrak{X}$, and the sum in (1) runs only over the points $x \in \mathfrak{X}$ such that $\xi(x) > 0$. However, for clarity of presentation, we work with the discrete case.

For an information function $\Phi: \mathcal{S}^m_+ \to [0, \infty)$ (see Pukelsheim (1993)), a design that maximizes $\Phi(M(\xi))$ over $\Xi$ is said to be $\Phi$-optimal. A common class of information functions are the so-called Kiefer's $\Phi_p$-optimality criteria for $p \in [0, \infty]$ (e.g., see Pukelsheim (1993), Chapter 6):

$$\Phi_p(M) = \left( \frac{1}{m} \sum_{i=1}^{m} \lambda_i^{-p}(M) \right)^{-1/p}$$

for nonsingular $M$ and $p \in (0, \infty)$, where $\lambda_1(M), \ldots, \lambda_m(M)$ are the eigenvalues of $M$; the cases $p = 0$ and $p = \infty$ are understood as the corresponding limits, and $\Phi_p(M) = 0$ if $M$ is singular. These criteria include the prominent $D$-, $A$- and $E$-optimality ($p = 0$, $p = 1$ and $p = \infty$, respectively). For simplicity, we assume that there exists a design $\xi \in \Xi$ such that $M(\xi)$ is nonsingular. It follows that any $\Phi_p$-optimal design is nonsingular (i.e., such that its information matrix is nonsingular).
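The criteria above are straightforward to evaluate from the eigenvalues; the following small sketch (Python, with an illustrative matrix) computes $\Phi_p$, treating the $D$- and $E$-cases as the corresponding limits:

```python
import numpy as np

def phi_p(M, p):
    # Kiefer's Phi_p criterion (larger is better); p=0 gives the
    # D-criterion det(M)^(1/m), p=np.inf gives the E-criterion lambda_min(M)
    lam = np.linalg.eigvalsh(M)              # eigenvalues, ascending
    if lam[0] <= 0:                          # singular M: criterion is 0
        return 0.0
    if p == 0:
        return float(np.exp(np.mean(np.log(lam))))   # det(M)^(1/m)
    if p == np.inf:
        return float(lam[0])                          # lambda_min(M)
    return float(np.mean(lam ** (-p)) ** (-1.0 / p))

M = np.diag([4.0, 1.0, 0.25])                # illustrative information matrix
print(phi_p(M, 0), phi_p(M, 1), phi_p(M, np.inf))   # D, A, E values
```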

The performance of algorithms for computing $\Phi$-optimal designs can be improved by iteratively reducing the size of $\mathfrak{X}$. For some criteria it was shown that any nonsingular design $\xi$ (e.g., one obtained in the $k$-th iteration of an algorithm) can be used to construct an inequality that must be satisfied by any design point supporting a $\Phi$-optimal design. Therefore, design points not satisfying this inequality can be removed from $\mathfrak{X}$. Generally, the closer the design $\xi$ is to the $\Phi$-optimal design, the more design points can be deleted. Early works pioneering this method for $D$-optimality were Harman (2003) and Pronzato (2003). Harman and Pronzato (2007) provided the currently best 'deletion method' for $D$-optimality. Pronzato (2013) formulated a method for removing design points for $\Phi_p$-optimality criteria with $p \in [0, \infty)$, thus covering $A$-optimality ($p = 1$). In this paper, we seek to cover $E$-optimality ($p = \infty$).

$E$-optimality possesses natural statistical interpretations: it protects against the worst variance of a linear function $h^{\mathsf{T}}\beta$ over all $h$ with $\|h\| = 1$, cf. (Pukelsheim, 1993, Section 6.4); and it minimizes the length of the largest principal axis of the confidence ellipsoid for $\beta$, cf. (Fedorov and Leonov, 2014, Section 2.2.1). It is one of the most important optimality criteria ((Pukelsheim, 1993, Section 6.1), (Atkinson et al., 2007, Section 10.1)). Nevertheless, there are fewer results on $E$-optimality than on the $D$- or $A$-criteria. This follows in part from the analytical and computational difficulties in dealing with $E$-optimality: unlike the other $\Phi_p$-criteria, it is neither strictly concave nor differentiable. For example, there are only a few algorithms for calculating $E$-optimal designs. Currently, $E$-optimal designs are usually computed by semidefinite programming (SDP) methods (Vandenberghe and Boyd (1999)), by the cutting-plane method (Pronzato and Pázman (2013), Section 9.5), or by general-purpose algorithms of non-differentiable optimization. However, these methods can be applied efficiently only to relatively small design spaces. The lack of strict concavity implies that the $E$-optimal information matrix (the information matrix of an $E$-optimal design) is generally not unique. The lack of differentiability makes most optimal design algorithms inapplicable to $E$-optimality, which is a known issue with maximin criteria; see Mandal et al. (2015). Note that $E$-optimality can be viewed as a maximin criterion, because it can be expressed as $\Phi_\infty(M) = \lambda_{\min}(M) = \min_{\|h\|=1} h^{\mathsf{T}} M h$. Because of its importance, there is still a sizeable body of theoretical results on $E$-optimality, e.g., Pukelsheim and Studden (1993), Dette and Studden (1993), Dette et al. (2006), Dette and Grigoriev (2014).
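To illustrate the SDP route: on a finite design space, maximizing $\lambda_{\min}(M(\xi))$ over the probability simplex is a semidefinite program and can be prototyped in a few lines. Below is a minimal sketch assuming the cvxpy package with an SDP-capable solver installed; the grid, model, and support threshold are illustrative:

```python
import numpy as np
import cvxpy as cp

# Hypothetical design space: quadratic regression on a grid in [-1, 1]
X = np.linspace(-1.0, 1.0, 51)
F = np.stack([np.ones_like(X), X, X**2], axis=1)   # rows are f(x)^T
n, m = F.shape

w = cp.Variable(n, nonneg=True)                    # design weights xi(x)
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(n))   # M(xi)
prob = cp.Problem(cp.Maximize(cp.lambda_min(M)),   # E-criterion
                  [cp.sum(w) == 1])
prob.solve()                                       # solved as an SDP
print("optimal E-criterion value:", prob.value)
print("support points:", X[w.value > 1e-5])
```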

The size $n$ of the design space and the (fixed) number $m$ of model parameters determine the dimensionality of the optimization problem. The difficulties with computing $E$-optimal designs make reducing the complexity of the optimization problem by removing unnecessary design points especially useful. Indeed, as will be shown in Section 3, the proposed deletion method allows the known algorithms for $E$-optimality to be applied to a larger class of problems. However, the lack of differentiability and strict concavity also means that the deletion method for $E$-optimality requires special attention, as noted by Pronzato (2013). These characteristics of $E$-optimality also lead to a slightly more complicated and less powerful deletion method compared to those for the other $\Phi_p$-criteria.

If $H(x) = f(x)f^{\mathsf{T}}(x)$ for all $x \in \mathfrak{X}$, a deletion method for a general optimality criterion is based on the Elfving set $\mathcal{E} = \mathrm{conv}\{f(x), -f(x) : x \in \mathfrak{X}\}$ (cf. Theorem 8.5 by Pukelsheim (1993)): design points $x$ such that $f(x)$ is not an extreme point of $\mathcal{E}$ can be removed for any information function $\Phi$ without losing any $\Phi$-optimal designs. One may determine whether a given $x_0 \in \mathfrak{X}$ can be deleted by checking the feasibility of the linear program

$$\sum_{x \neq x_0} (\alpha_x - \beta_x) f(x) = f(x_0), \quad \sum_{x \neq x_0} (\alpha_x + \beta_x) = 1, \quad \alpha_x, \beta_x \geq 0,$$

assuming that $f(x) \neq \pm f(\tilde{x})$ for all $x \neq \tilde{x}$. Note that the deletion method based on the Elfving set does not depend on a given design (e.g., it should be performed before an algorithm is run, as there is no benefit in using it during the iteration process), and it is rather slow: to apply this deletion method, one needs to perform $n$ feasibility checks for linear programs.
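Under the formulation above, one such feasibility check can be implemented with scipy.optimize.linprog; the helper below is a sketch (its name and the example model are ours):

```python
import numpy as np
from scipy.optimize import linprog

def elfving_removable(F, i):
    # Check whether f(x_i) (row i of F) lies in the convex hull of
    # {+f(x), -f(x) : x != x_i}; if so, it is not an extreme point of
    # the Elfving set and the point x_i can be removed.
    others = np.delete(F, i, axis=0)
    G = np.vstack([others, -others]).T            # columns are generators
    k = G.shape[1]
    # Feasibility LP: G c = f(x_i), sum(c) = 1, c >= 0
    A_eq = np.vstack([G, np.ones((1, k))])
    b_eq = np.concatenate([F[i], [1.0]])
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * k, method="highs")
    return res.status == 0                        # feasible => removable

# Illustrative check: line model f(x) = (1, x) on a grid; only the two
# endpoints are extreme points of the Elfving set
X = np.linspace(-1, 1, 5)
F = np.stack([np.ones_like(X), X], axis=1)
print([elfving_removable(F, i) for i in range(len(X))])
```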

2 Necessary condition for support points

The provided method, as well as those of Harman and Pronzato (2007) and Pronzato (2013), relies on the Equivalence theorem (see Pukelsheim (1993), Chapter 7). The subgradients of $\Phi_\infty = \lambda_{\min}$ at a nonsingular matrix $M$ are of the form $E = \sum_{j=1}^{s} \alpha_j u_j u_j^{\mathsf{T}}$, where $u_1, \ldots, u_s$ are orthonormal eigenvectors corresponding to $\lambda_{\min}(M)$ and $\alpha_1, \ldots, \alpha_s$ are some nonnegative weights that sum to 1. Hence, the Equivalence theorem for $E$-optimality on the set of information matrices $\{M(\xi) : \xi \in \Xi\}$ becomes (see, e.g., Pukelsheim (1993), Theorem 7.21):

Lemma 1.

Let $\xi^* \in \Xi$ be such that $M(\xi^*)$ is nonsingular. Then $\xi^*$ is $E$-optimal in $\Xi$ if and only if there exists a nonnegative definite matrix $E$ with $\mathrm{tr}(E) = 1$ such that $\mathrm{tr}(M(\xi)E) \leq \lambda_{\min}(M(\xi^*))$ for all $\xi \in \Xi$. In the case of optimality, $\mathrm{tr}(M(\tilde{\xi})E) = \lambda_{\min}(M(\xi^*))$ for any $\tilde{\xi}$ that is $E$-optimal.

In fact, the matrix $E$ is given by $E = \sum_{j=1}^{s} \alpha_j u_j u_j^{\mathsf{T}}$ for some weights $\alpha_j$ and eigenvectors $u_j$ as described in the previous paragraph.

The Equivalence theorem can be slightly adapted to the information matrices of the form (1) considered in this paper.

Corollary 1.

Let $\xi^* \in \Xi$ and let $M(\xi^*)$ be nonsingular. Then $\xi^*$ is $E$-optimal if and only if there exists a nonnegative definite matrix $E$ with $\mathrm{tr}(E) = 1$ such that $\mathrm{tr}(H(x)E) \leq \lambda_{\min}(M(\xi^*))$ for all $x \in \mathfrak{X}$. In the case of optimality, $\mathrm{tr}(H(x)E) = \lambda_{\min}(M(\xi^*))$ for any $x$ that supports any $E$-optimal design.

Proof.

Let $\lambda^* := \lambda_{\min}(M(\xi^*))$. To prove the first part, it suffices to observe that if $\mathrm{tr}(H(x)E) \leq \lambda^*$ for all $x \in \mathfrak{X}$, we also have $\mathrm{tr}(M(\xi)E) = \sum_{x \in \mathfrak{X}} \xi(x)\,\mathrm{tr}(H(x)E) \leq \lambda^*$ for any $\xi \in \Xi$.

Suppose that $\xi^*$ and $\tilde{\xi}$ are $E$-optimal, and that $E$ satisfies Lemma 1 with $\xi^*$. Then $\mathrm{tr}(M(\tilde{\xi})E) = \lambda^*$ and $\mathrm{tr}(H(x)E) \leq \lambda^*$ for any $x \in \mathfrak{X}$. Then

$$\lambda^* = \mathrm{tr}(M(\tilde{\xi})E) = \sum_{x \in \mathfrak{X}} \tilde{\xi}(x)\,\mathrm{tr}(H(x)E) \leq \lambda^*.$$

To obtain equality in the last inequality, $\mathrm{tr}(H(x)E) = \lambda^*$ must be satisfied for any $x$ supporting $\tilde{\xi}$. ∎

The main result of this paper follows.

Theorem 1.

Let $\xi$ be a design with a nonsingular information matrix and let $\lambda := \lambda_{\min}(M(\xi))$. Take any number, say $k$, of normalized eigenvectors $v_1, \ldots, v_k$ of $M(\xi)$ and any $\alpha_1, \ldots, \alpha_k \geq 0$ such that $\sum_{i=1}^{k} \alpha_i = 1$, and set

$$E_\xi := \sum_{i=1}^{k} \alpha_i v_i v_i^{\mathsf{T}}, \qquad \epsilon := \max_{x \in \mathfrak{X}} \mathrm{tr}\big(H(x)E_\xi\big) - \lambda.$$

Then $\epsilon \geq 0$. If $\epsilon = 0$, then $\xi$ is $E$-optimal. If $\epsilon > 0$, then

$$h_x(\delta) \geq \lambda$$

for any $x$ supporting an $E$-optimal design and for any admissible $\delta > 0$, where $h_x$ is the bound constructed from $\delta$, $\epsilon$, $H(x)$ and the orthonormal eigenvectors of $M(\xi)$, in particular those corresponding to $\lambda_{\min}(M(\xi))$.

Proof.

First, observe that

$$\lambda_{\min}(M) \leq \mathrm{tr}(ME) \qquad (2)$$

for any nonnegative definite matrix $E$ with $\mathrm{tr}(E) = 1$ and any information matrix $M = M(\xi')$, $\xi' \in \Xi$. Therefore,

$$\lambda_{\min}(M(\xi')) \leq \mathrm{tr}(M(\xi')E_\xi) = \sum_{x \in \mathfrak{X}} \xi'(x)\,\mathrm{tr}(H(x)E_\xi) \leq \lambda + \epsilon \quad \text{for any } \xi' \in \Xi.$$

Moreover, applying (2) to $M(\xi)$ itself gives $\max_{x \in \mathfrak{X}} \mathrm{tr}(H(x)E_\xi) \geq \mathrm{tr}(M(\xi)E_\xi) \geq \lambda$; hence $\epsilon \geq 0$. Let $\epsilon = 0$. Then $\mathrm{tr}(H(x)E_\xi) \leq \lambda$ for all $x \in \mathfrak{X}$, and the inequalities above become equalities. Therefore, Corollary 1 yields that $\xi$ is $E$-optimal, with $E = E_\xi$.

Now, let $\epsilon > 0$ and let $x$ support an $E$-optimal design $\xi^*$. Let us denote $M^* := M(\xi^*)$ and $\lambda^* := \lambda_{\min}(M^*)$. Then, by Corollary 1, there exists a nonnegative definite matrix $E^*$ with $\mathrm{tr}(E^*) = 1$, such that $\mathrm{tr}(H(x)E^*) = \lambda^*$, and, using (2), $\lambda \leq \lambda^* \leq \lambda + \epsilon$.

Suppose that $\delta > 0$ is admissible and let $N_\delta := M(\xi) - (\lambda - \delta)I$, where $I$ denotes the $m \times m$ identity matrix. Then $N_\delta$ is positive definite, and bounding $\mathrm{tr}(H(x)E^*)$ by means of $N_\delta$ leads to an inequality (3), because the Frobenius norm is sub-multiplicative. The constraints on the weights $\alpha_1, \ldots, \alpha_k$ and the inequality $\lambda^* \leq \lambda + \epsilon$ guarantee that the right-hand side of (3) is controlled by $\epsilon$ and $\delta$; hence (3) yields a lower bound on $\mathrm{tr}(H(x)E^*) = \lambda^* \geq \lambda$ in terms of the quantities defining $h_x$. The spectral decomposition of $M(\xi)$ then gives $h_x(\delta) \geq \lambda$. ∎

Therefore, using any nonsingular design $\xi$, one can remove all design points $x$ that satisfy $h_x(\delta) < \lambda$ for some admissible $\delta$ given by Theorem 1. For the usual case of linear regression, where $H(x) = f(x)f^{\mathsf{T}}(x)$, we have $\mathrm{tr}(H(x)E_\xi) = f^{\mathsf{T}}(x)E_\xi f(x)$, and all quantities entering $h_x$ can be computed directly from $f(x)$.

In the following, we assume that we have a design $\xi$ with nonsingular $M(\xi)$ and $\lambda = \lambda_{\min}(M(\xi))$, and that $\epsilon > 0$. One should try to make the values of $h_x$ as low as possible, so that more $x$'s can be deleted. Fortunately, minimizing $h_x$ with respect to $\delta$ is a (one-dimensional) convex problem. Hence, the calculation of the optimal $\delta$ is very fast.
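The fully computable part of this procedure, assembling $E_\xi$ and evaluating $\epsilon$ (which also yields the $\epsilon = 0$ optimality certificate of Theorem 1), can be sketched as follows. This is a minimal Python sketch for the single-response case $H(x) = f(x)f^{\mathsf{T}}(x)$ with equal weights $\alpha_i$ on the eigenvectors of $\lambda_{\min}(M(\xi))$; the subsequent per-point minimization of $h_x$ is omitted because it requires the explicit form of the bound:

```python
import numpy as np

def eps_certificate(F, w):
    # Compute E_xi from the eigenvectors of lambda_min(M(xi)) and
    # eps = max_x tr(H(x) E_xi) - lambda_min(M(xi));
    # eps == 0 certifies that xi is E-optimal (Corollary 1)
    M = F.T @ (w[:, None] * F)                  # M(xi) for H(x) = f(x)f(x)^T
    lam, V = np.linalg.eigh(M)                  # eigenvalues, ascending
    lam_min = lam[0]
    k = int(np.sum(np.isclose(lam, lam_min)))   # multiplicity of lambda_min
    E = (V[:, :k] @ V[:, :k].T) / k             # equal weights alpha_i = 1/k
    tHE = np.einsum('ij,jk,ik->i', F, E, F)     # tr(H(x)E) = f(x)^T E f(x)
    return tHE.max() - lam_min, lam_min, tHE

# Illustrative check: weights (0.5, 0, 0.5) on {-1, 0, 1} for f(x) = (1, x)
X = np.array([-1.0, 0.0, 1.0])
F = np.stack([np.ones_like(X), X], axis=1)
eps, lam_min, tHE = eps_certificate(F, np.array([0.5, 0.0, 0.5]))
print(eps, lam_min)    # eps == 0 here, so this design is E-optimal
```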

Lemma 2.

The function $h_x$ is convex in $\delta$ on the interval of admissible values.

Proof.

The lemma can easily be proved by calculating the second derivative of $h_x$ with respect to $\delta$. ∎

If the derivative of $h_x$ at the endpoint of the interval of admissible $\delta$'s is not less than 0, then, by convexity, the optimal $\delta$ is this endpoint itself; this situation is characterized by an explicit inequality. When that inequality holds for every $x \in \mathfrak{X}$, the endpoint value can always be used. Otherwise, we seek $\delta$ such that $h_x'(\delta) = 0$. This is in fact the problem of finding a root, on the interval of admissible $\delta$'s, of a polynomial whose degree is determined by $m$.
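Since this is a bounded one-dimensional convex minimization, any scalar solver suffices; a sketch with a stand-in convex function (the explicit $h_x$ of Theorem 1 is not reproduced here, and the interval of admissible $\delta$'s is assumed for illustration):

```python
from scipy.optimize import minimize_scalar

# Stand-in for the convex bound delta -> h_x(delta) of Theorem 1;
# any smooth convex function behaves the same way here
h_x = lambda delta: (delta - 0.3) ** 2 + 1.0

res = minimize_scalar(h_x, bounds=(1e-8, 10.0), method="bounded")
delta_opt, h_min = res.x, res.fun    # optimal delta and minimized bound
print(delta_opt, h_min)
```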

The following lemma shows that minimizing $h_x$ also entails choosing $\epsilon$ as low as possible. First, write $h_x(\delta, \epsilon)$ for the bound of Theorem 1 viewed as a function of both $\delta$ and $\epsilon$.

Lemma 3.

If $\epsilon_1 \leq \epsilon_2$, then $h_x(\delta, \epsilon_1) \leq h_x(\delta, \epsilon_2)$ for any admissible $\delta$.

Proof.

The proof is straightforward: by computing the derivative of $h_x(\delta, \epsilon)$ with respect to $\epsilon$ and observing that the set of admissible $\delta$'s is 'increasing' with $\epsilon$. ∎

For a given choice of the eigenvectors $v_1, \ldots, v_k$, the weights $\alpha_1, \ldots, \alpha_k$ minimizing $\epsilon$ can be obtained by a simple linear program:

$$\min_{t,\, \alpha_1, \ldots, \alpha_k} t \qquad (4)$$

$$\text{s.t.} \quad \sum_{i=1}^{k} \alpha_i\, v_i^{\mathsf{T}} H(x)\, v_i \leq t \ \text{ for all } x \in \mathfrak{X}, \qquad \sum_{i=1}^{k} \alpha_i = 1, \qquad \alpha \geq 0_k,$$

where $0_k$ is the vector of zeros. Therefore, the proposed method entails solving one linear program (unlike the deletion method based on the Elfving set, which requires solving $n$ linear programs) and $n$ one-dimensional convex optimizations.
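In the epigraph form given above, (4) is directly expressible with scipy.optimize.linprog; the following sketch (function and variable names are ours) optimizes over $(\alpha_1, \ldots, \alpha_k, t)$:

```python
import numpy as np
from scipy.optimize import linprog

def optimal_weights(F, V):
    # Minimize max_x tr(H(x) E) over the weights alpha (LP (4)).
    # F has rows f(x)^T; V has columns v_1, ..., v_k.
    # Note v_i^T H(x) v_i = (f(x)^T v_i)^2 for H(x) = f(x) f(x)^T.
    n, k = F.shape[0], V.shape[1]
    Q = (F @ V) ** 2                                  # n x k matrix
    c = np.zeros(k + 1); c[-1] = 1.0                  # objective: t
    A_ub = np.hstack([Q, -np.ones((n, 1))])           # Q alpha - t <= 0
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, k + 1)); A_eq[0, :k] = 1.0    # sum(alpha) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * k + [(None, None)]         # alpha >= 0, t free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:k], res.x[-1]

# Illustrative usage with the line model and the eigenvectors of M(xi):
X = np.linspace(-1, 1, 5)
F = np.stack([np.ones_like(X), X], axis=1)
M = F.T @ F / len(X)
_, V = np.linalg.eigh(M)
alpha, t = optimal_weights(F, V)
print(alpha, t)
```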

If $M(\xi)$ has $m$ distinct eigenvalues, the normalized eigenvectors are fixed (up to a reflection around the origin), so these eigenvectors should be chosen for calculating $E_\xi$. However, if some eigenvalue of $M(\xi)$ has multiplicity greater than 1, there is freedom in choosing the $v_i$'s, but if the minimization (4) is taken also with respect to the normalized $v_i$'s, the problem becomes nonlinear (and even nonconvex). Therefore, we suggest choosing the set of orthonormal eigenvectors given by the spectral decomposition of $M(\xi)$, with possibly some additional eigenvectors corresponding to $\lambda_{\min}(M(\xi))$ if it has multiplicity greater than 1. Such a recommendation follows from the fact that $E_\xi$ tries to approximate the matrix $E$ in the Equivalence theorem, which depends on the eigenvectors corresponding to the minimum eigenvalue of the optimal information matrix. Then, the weights $\alpha_i$ can be calculated by the linear program (4).

3 Example

The $E$-optimality problem is an SDP problem (Vandenberghe and Boyd (1999)), which can be solved by standard solvers such as SeDuMi or MOSEK. However, the use of these methods is severely limited by the available computer memory, because their memory requirements grow rapidly with the size $n$ of the design space. For instance, the SDP method could calculate $E$-optimal designs only for problems up to a certain size on the computer specified in the next paragraph. Therefore, the proposed deletion method can be used to allow the solvers to deal with larger problems, as demonstrated in Example 1.

All calculations in this section were done in MATLAB on a computer with a 64-bit Windows 8 operating system, an Intel Core i5-4590S CPU at 3.00 GHz and 4 GB of RAM; the SDP problems are solved in MATLAB using SeDuMi through the CVX software. Throughout, the eigenvectors in Theorem 1 are chosen as the orthonormal set given by the spectral decomposition of $M(\xi)$, the weights $\alpha_i$ are calculated by the linear program (4), and the values of $\delta$ are calculated by minimizing the functions $h_x$.

Figure 1: The deletion method applied to Example 1, based on (a) a design optimal on a random subset of $\mathfrak{X}$ and (b) a design optimal on a coarser grid. The gray area represents $\mathfrak{X}$, i.e., the points that satisfy the constraint. The bright gray region denotes the points that were removed using the proposed method; the points in the dark gray region were not deleted. The black circles represent the support of the obtained $E$-optimal design; their areas are proportional to the design weights. In each case, the resulting design consists of three sequences of points. Many of the points in these sequences attain rather small weights (smaller than 0.02), but generally they cannot be omitted.
Example 1.

Consider quadratic regression on a square, discretized uniformly into a grid of design points; the regressors are of the form $f(x) = (1, x_1, x_2, x_1^2, x_2^2)^{\mathsf{T}}$ with $x = (x_1, x_2)^{\mathsf{T}}$. Such models are typically used in response surface methodology (see, e.g., Myers et al. (2016)). Moreover, suppose that only some combinations of $x_1$ and $x_2$ are allowed in the experiment, expressed by a constraint on $x$; thus, only design points satisfying this constraint belong to $\mathfrak{X}$. For this example, the constraint was selected at random. To the best of our knowledge, analytical formulas for $E$-optimal designs are not known for this model on the constrained design space, unlike for the model on the entire square, which is a rather simple design problem.
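A sketch of such a setup follows; the grid density, the linear form of the constraint and its coefficients are illustrative assumptions, not the values used in this example:

```python
import numpy as np

# Quadratic model without interaction on a uniformly discretized square
g = np.linspace(-1.0, 1.0, 101)              # assumed grid density
X1, X2 = np.meshgrid(g, g)
pts = np.column_stack([X1.ravel(), X2.ravel()])

# Hypothetical linear constraint c^T x <= tau defining the design space
c, tau = np.array([1.0, 2.0]), 0.5
pts = pts[pts @ c <= tau]                    # constrained design space

def f(x):                                    # regressors f(x)
    x1, x2 = x
    return np.array([1.0, x1, x2, x1**2, x2**2])

F = np.apply_along_axis(f, 1, pts)           # n x 5 matrix of regressors
print(F.shape)
```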

Of the original design points, only those satisfying the constraint belong to $\mathfrak{X}$, and the SDP method runs out of memory while trying to calculate an $E$-optimal design on all of $\mathfrak{X}$ at once. However, using the deletion method, this can be remedied. We first calculate an $E$-optimal design $\xi_1$ on a subset consisting of points chosen at random from $\mathfrak{X}$. Based on $\xi_1$, a large portion of the points of the entire set $\mathfrak{X}$ is deleted, and on the remaining points an $E$-optimal design can easily be calculated. Theorem 1 guarantees that this design is $E$-optimal on the original design space $\mathfrak{X}$. The deletion results as well as the final $E$-optimal design are illustrated in Figure 1(a).

On the computer specified above, the calculation of $\xi_1$ takes around 5 minutes, the deletion method takes approximately 3.3 minutes, and the $E$-optimal design on the remaining points is calculated in less than 10 seconds.

Instead of selecting a random subset of $\mathfrak{X}$, the removal of unnecessary design points can also be performed based on a design that is $E$-optimal on a less dense grid. For instance, we may halve the density of the discretization; the coarser design space $\mathfrak{X}_2$ then consists of the points of the coarser grid that satisfy the constraint. Note that $\mathfrak{X}_2$ is indeed a subset of $\mathfrak{X}$. The design $\xi_2$ that is optimal on $\mathfrak{X}_2$ is obtained in less than 3 seconds, and based on $\xi_2$, a large portion of the points of $\mathfrak{X}$ can be deleted in around 3.5 minutes. An $E$-optimal design on the remaining points (which is also $E$-optimal on the entire $\mathfrak{X}$) can be calculated in less than 2 seconds. The results are illustrated in Figure 1(b).

Note that the deletion method based on the Elfving set takes more than two hours and does not delete any design points in the current example, although this method generally deletes a nonzero number of points.

The number of points removed by the proposed method naturally depends on the selected model. Although the deletion method generally allows for solving problems of greater size, the increase in size may be rather small. For instance, in settings identical to Example 1, only with an added interaction term (i.e., with $x_1 x_2$ included among the regressors), the deletion method removes a smaller number of points. For one random selection, the approach based on a randomly chosen subset of $\mathfrak{X}$ did not remove enough points to allow the SDP algorithm to be applied to the remaining points on the specified computer. However, by using the less dense discretization, similarly to Example 1, enough points can be removed so that the $E$-optimal design (which is also optimal on the entire $\mathfrak{X}$) can be calculated on the remaining design points in around 40 minutes. Hence, considering a slightly less dense discretization of discretized models to delete non-optimal design points seems to be an efficient approach.

Acknowledgements

This work was supported by the Slovak Scientific Grant Agency [grant VEGA 1/0521/16].

References

  • Atkinson et al. [2007] A. C. Atkinson, A. Donev, and R. Tobias. Optimum experimental designs, with SAS. Oxford University Press, New York, 2007.
  • Dette and Grigoriev [2014] H. Dette and Y. Grigoriev. E-optimal designs for second-order response surface models. The Annals of Statistics, 42:1635–1656, 2014.
  • Dette and Studden [1993] H. Dette and W. J. Studden. Geometry of E-optimality. The Annals of Statistics, 21:416–433, 1993.
  • Dette et al. [2006] H. Dette, B. Melas, and A. Pepelyshev. Local c- and E-optimal designs for exponential regression models. Annals of the Institute of Statistical Mathematics, 58:407–426, 2006.
  • Fedorov and Leonov [2014] V. V. Fedorov and S. L. Leonov. Optimal Design for Nonlinear Response Models. CRC Press, Boca Raton, 2014.
  • Harman [2003] R. Harman. A method how to delete points which do not support a D-optimal design. Tatra Mountains Mathematical Publications, 26:59–67, 2003.
  • Harman [2014] R. Harman. Multiplicative methods for computing D-optimal stratified designs of experiments. Journal of Statistical Planning and Inference, 146:82–94, 2014.
  • Harman and Pronzato [2007] R. Harman and L. Pronzato. Improvements on removing nonoptimal support points in D-optimum design algorithms. Statistics & Probability Letters, 77:90–94, 2007.
  • Harman and Trnovská [2009] R. Harman and M. Trnovská. Approximate D-optimal designs of experiments on the convex hull of a finite set of information matrices. Mathematica Slovaca, 59:693–704, 2009.
  • Mandal et al. [2015] A. Mandal, W. K. Wong, and Y. Yu. Algorithmic searches for optimal designs. In A. Dean, M. Morris, J. Stufken, and D. Bingham, editors, Handbook of Design and Analysis of Experiments. Chapman & Hall/CRC, Boca Raton, 2015.
  • Myers et al. [2016] R. H. Myers, D. C. Montgomery, and C. M. Anderson-Cook. Response surface methodology: process and product optimization using designed experiments, volume 3. John Wiley & Sons, New Jersey, 2016.
  • Pázman [1986] A. Pázman. Foundations of Optimum Experimental Design. Reidel Publ., Dordrecht, 1986.
  • Pronzato [2003] L. Pronzato. Removing non-optimal support points in D-optimum design algorithms. Statistics & Probability Letters, 63:223–228, 2003.
  • Pronzato [2013] L. Pronzato. A delimitation of the support of optimal designs for Kiefer's Φ_p-class of criteria. Statistics & Probability Letters, 83:2721–2728, 2013.
  • Pronzato and Pázman [2013] L. Pronzato and A. Pázman. Design of Experiments in Nonlinear Models. Springer, New York, 2013.
  • Pukelsheim [1993] F. Pukelsheim. Optimal design of experiments. Wiley, New York, 1993.
  • Pukelsheim and Studden [1993] F. Pukelsheim and W. J. Studden. E-optimal designs for polynomial regression. The Annals of Statistics, 21:402–415, 1993.
  • Vandenberghe and Boyd [1999] L. Vandenberghe and S. Boyd. Applications of semidefinite programming. Applied Numerical Mathematics, 29:283–299, 1999.