1 Introduction
The subject of this work is random coefficients regression (RCR) models, which were initially introduced in the biosciences and are nowadays popular in many fields of statistical application, for example in agricultural studies or medical research.
Linear criteria as well as a generalized version of the D-criterion in RCR models have been discussed in Prus and Schwabe (2016b). Some results for the particular case of the c-criterion, interpolation and extrapolation, are presented in Prus and Schwabe (2016a). In this work we concentrate on Kiefer's criteria (see e.g. Fedorov and Leonov (2013), p. 54, for the fixed effects case), which are based on the eigenvalues of the information matrix in the random coefficient regression models, and the G- (global) criterion, which aims to minimize the maximal prediction mean squared error over the experimental region. For fixed effects models, the E-criterion (see e.g. Atkinson et al. (2007), ch. 10), which can be recognized as the particular Kiefer's criterion for $q \to \infty$, has been considered in detail in Kiefer (1974). Kiefer and Wolfowitz (1960) proved the equivalence of the D- and G-criteria in models with homoscedastic errors. More recently, Wong (1995) discussed G-optimality in heteroscedastic models. For these models it has been established that the D- and G-criteria are in general not equivalent. Here we present some results for the general form of Kiefer's criteria in RCR models and prove for the E-criterion (largest eigenvalue) that optimal designs for fixed effects models retain their optimality for the prediction. We consider the G-criterion for ordinary linear regression with a diagonal covariance structure on specific experimental regions.
The paper has the following structure: In the second part the random coefficients regression models are specified and the best linear unbiased prediction of the individual random parameters is presented. The third part provides analytical results for the designs which are optimal for the prediction. The paper is concluded by a short discussion in the fourth part.
2 Model Specification and Prediction
In this paper we consider random coefficients regression models in which the $j$-th observation of individual $i$ is given by
\[
Y_{ij} = \mathbf{f}(x_j)^{\top}\boldsymbol{\beta}_i + \varepsilon_{ij}, \quad j = 1, \dots, m, \quad i = 1, \dots, n, \tag{1}
\]
where $m$ is the number of observations per individual, $n$ is the number of individuals, $\mathbf{f} = (f_1, \dots, f_p)^{\top}$ is a vector of known regression functions and the experimental settings $x_j$ vary in an experimental region $\mathcal{X}$. The observational errors $\varepsilon_{ij}$ are assumed to have zero mean and common variance $\sigma^2 > 0$. The individual parameters $\boldsymbol{\beta}_i = (\beta_{i1}, \dots, \beta_{ip})^{\top}$ have unknown population mean $\mathrm{E}(\boldsymbol{\beta}_i) = \boldsymbol{\mu}$ and known positive definite covariance matrix $\mathrm{Cov}(\boldsymbol{\beta}_i) = \sigma^2 \mathbf{D}$. All individual parameters and all observational errors are assumed to be uncorrelated. The best linear unbiased predictor of the individual parameter $\boldsymbol{\beta}_i$ is given by
\[
\hat{\boldsymbol{\beta}}_i = \hat{\boldsymbol{\mu}} + \left(\mathbf{F}^{\top}\mathbf{F} + \mathbf{D}^{-1}\right)^{-1}\mathbf{F}^{\top}\left(\mathbf{Y}_i - \mathbf{F}\hat{\boldsymbol{\mu}}\right), \tag{2}
\]
where $\hat{\boldsymbol{\mu}} = (\mathbf{F}^{\top}\mathbf{F})^{-1}\mathbf{F}^{\top}\bar{\mathbf{Y}}$ and $\mathbf{F} = (\mathbf{f}(x_1), \dots, \mathbf{f}(x_m))^{\top}$ for the individual vector of observations $\mathbf{Y}_i = (Y_{i1}, \dots, Y_{im})^{\top}$, the mean observational vector $\bar{\mathbf{Y}} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{Y}_i$ and the design matrix $\mathbf{F}$, which is assumed to be of full column rank.
The mean squared error (MSE) matrix of the vector $\hat{\boldsymbol{\beta}} = (\hat{\boldsymbol{\beta}}_1^{\top}, \dots, \hat{\boldsymbol{\beta}}_n^{\top})^{\top}$ of all predictors of all individual parameters is given by
\[
\mathrm{MSE} = \sigma^2\left(\frac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^{\top} \otimes \left(\mathbf{F}^{\top}\mathbf{F}\right)^{-1} + \left(\mathbb{I}_n - \frac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^{\top}\right) \otimes \left(\mathbf{F}^{\top}\mathbf{F} + \mathbf{D}^{-1}\right)^{-1}\right), \tag{3}
\]
where $\mathbb{I}_n$ is the $n \times n$ identity matrix, $\mathbf{1}_n$ is the vector of length $n$ with all elements equal to $1$ and $\otimes$ denotes the Kronecker product.

3 Optimal Designs
We define here exact designs in the following way:
\[
\xi = \begin{pmatrix} x_1, & \dots, & x_K \\ m_1, & \dots, & m_K \end{pmatrix}, \tag{4}
\]
where $x_1, \dots, x_K$ are the distinct experimental settings with the numbers of replications $m_1, \dots, m_K$, $\sum_{k=1}^{K} m_k = m$. Approximate designs are defined as
\[
\xi = \begin{pmatrix} x_1, & \dots, & x_K \\ w_1, & \dots, & w_K \end{pmatrix}, \tag{5}
\]
where $w_k$ denotes the weight of observations at $x_k$ and only the conditions $w_k \geq 0$ and $\sum_{k=1}^{K} w_k = 1$ have to be satisfied (integer numbers of replications $m_k = m\, w_k$ are not required). Further we will use the notation
\[
\mathbf{M}(\xi) = \sum_{k=1}^{K} w_k\, \mathbf{f}(x_k)\,\mathbf{f}(x_k)^{\top} \tag{6}
\]
for the standardized information matrix from the fixed effects model and $\boldsymbol{\Delta} = m\,\mathbf{D}$ for the adjusted dispersion matrix for the random effects. We assume the matrix $\mathbf{M}(\xi)$ to be nonsingular. With this notation the definition of the mean squared error matrix (3) can be extended for approximate designs to
\[
\mathrm{MSE}_{\xi} = \frac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^{\top} \otimes \mathbf{M}(\xi)^{-1} + \left(\mathbb{I}_n - \frac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^{\top}\right) \otimes \left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-1} \tag{7}
\]
when we neglect the constant factor $\sigma^2 / m$.
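The relation between the exact-design MSE matrix (3) and its approximate-design version (7) can be illustrated numerically. The following minimal sketch assumes the standardization $\mathbf{M}(\xi) = \frac{1}{m}\mathbf{F}^{\top}\mathbf{F}$ and $\boldsymbol{\Delta} = m\mathbf{D}$ introduced above; the design matrix, the dispersion matrix and all dimensions are hypothetical values chosen only for the illustration.

```python
import numpy as np

m, p, n = 5, 2, 4                      # observations, parameters, individuals (illustrative)

# hypothetical design matrix F for a straight line f(x) = (1, x)^T and dispersion D
F = np.column_stack([np.ones(m), np.linspace(0, 1, m)])
D = np.diag([0.5, 0.3])

J = np.ones((n, n)) / n                # averaging matrix (1/n) 1 1^T
I = np.eye(n)

# exact-design MSE structure (3), without the factor sigma^2
FtF = F.T @ F
mse_exact = (np.kron(J, np.linalg.inv(FtF))
             + np.kron(I - J, np.linalg.inv(FtF + np.linalg.inv(D))))

# approximate-design form (7): M(xi) = F^T F / m, Delta = m D, overall factor 1/m
M = FtF / m
Delta = m * D
mse_approx = (np.kron(J, np.linalg.inv(M))
              + np.kron(I - J, np.linalg.inv(M + np.linalg.inv(Delta)))) / m

print(np.allclose(mse_exact, mse_approx))   # True
```

The two expressions agree exactly, since $\mathbf{F}^{\top}\mathbf{F} = m\,\mathbf{M}(\xi)$ and $\mathbf{D}^{-1} = m\,\boldsymbol{\Delta}^{-1}$.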
3.1 Kiefer’s criterion
We define Kiefer's $\Phi_q$-criterion for the prediction in RCR models for all values of $q \in (0, \infty)$ as the following function of the trace of the MSE matrix (3) of the prediction:
\[
\Phi_q = \left(\frac{1}{n p}\,\mathrm{tr}\left(\mathrm{MSE}^{\,q}\right)\right)^{1/q}. \tag{8}
\]
Then we extend the definition of the criterion for approximate designs and obtain the following result.
Theorem 1.
Kiefer's $\Phi_q$-criterion for the prediction of individual parameters is for approximate designs given by
\[
\Phi_q(\xi) = \left(\frac{1}{n p}\left(\mathrm{tr}\left(\mathbf{M}(\xi)^{-q}\right) + (n-1)\,\mathrm{tr}\left(\left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-q}\right)\right)\right)^{1/q}. \tag{9}
\]
Proof.
As proved in Prus (2015), ch. 5, the eigenvalues of $\mathrm{MSE}_{\xi}$ are the eigenvalues of $\mathbf{M}(\xi)^{-1}$ with multiplicity $1$ and the eigenvalues of $\left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-1}$ with multiplicity $n-1$. Then we obtain
\[
\mathrm{tr}\left(\mathrm{MSE}_{\xi}^{\,q}\right) = \mathrm{tr}\left(\mathbf{M}(\xi)^{-q}\right) + (n-1)\,\mathrm{tr}\left(\left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-q}\right),
\]
which implies formula (9).
∎
Note that Kiefer's criterion (9) can be recognized as a weighted sum of the Kiefer's criteria in the fixed effects and Bayesian models (if we neglect the power $1/q$). The weight $(n-1)/n$ of the Bayesian part increases with increasing number of individuals $n$. For models with only one individual ($n = 1$) optimal designs in fixed effects models are optimal for the prediction.
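The eigenvalue decomposition underlying Theorem 1 can be checked numerically: for the block matrix (7) the spectrum consists of the eigenvalues of $\mathbf{M}(\xi)^{-1}$ with multiplicity $1$ and those of $(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1})^{-1}$ with multiplicity $n-1$, from which the trace formula in (9) follows. A minimal sketch with randomly generated, purely hypothetical matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, q = 3, 5, 2.0                    # illustrative dimensions and criterion power

# hypothetical positive definite M (information) and Delta (adjusted dispersion)
A = rng.standard_normal((p, p))
M = A @ A.T + p * np.eye(p)
Delta = np.diag(rng.uniform(0.5, 2.0, p))

J = np.ones((n, n)) / n
mse = (np.kron(J, np.linalg.inv(M))
       + np.kron(np.eye(n) - J, np.linalg.inv(M + np.linalg.inv(Delta))))

# spectrum: eig(M^{-1}) once, eig((M + Delta^{-1})^{-1}) with multiplicity n-1
lam_fixed = np.linalg.eigvalsh(np.linalg.inv(M))
lam_bayes = np.linalg.eigvalsh(np.linalg.inv(M + np.linalg.inv(Delta)))
expected = np.sort(np.concatenate([lam_fixed] + [lam_bayes] * (n - 1)))
print(np.allclose(np.sort(np.linalg.eigvalsh(mse)), expected))   # True

# hence tr(MSE^q) = tr(M^{-q}) + (n-1) tr((M + Delta^{-1})^{-q})
lhs = np.sum(np.linalg.eigvalsh(mse) ** q)
rhs = np.sum(lam_fixed ** q) + (n - 1) * np.sum(lam_bayes ** q)
print(np.allclose(lhs, rhs))   # True
```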
Particular cases of Kiefer's criterion (for $q \to 0$ and $q = 1$), the D- and A-criteria, have been considered in detail by Prus and Schwabe (2016b). The E-criterion, which can also be recognized as the limiting Kiefer's criterion for $q \to \infty$, will be discussed in the next section.
3.2 Ecriterion
We define the E-criterion (eigenvalue criterion) for the prediction as the largest eigenvalue of the MSE matrix (3):
\[
\Phi_E = \lambda_{\max}\left(\mathrm{MSE}\right). \tag{10}
\]
For approximate designs we obtain the following result.
Theorem 2.
The E-criterion for the prediction of individual parameters is for approximate designs given by
\[
\Phi_E(\xi) = \lambda_{\max}\left(\mathbf{M}(\xi)^{-1}\right), \tag{11}
\]
where $\lambda_{\max}\left(\mathbf{M}(\xi)^{-1}\right)$ denotes the largest eigenvalue of $\mathbf{M}(\xi)^{-1}$.
Proof.
It is easy to see that $\mathbf{M}(\xi) \leq \mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}$ in Loewner ordering. Therefore, the smallest eigenvalue of $\mathbf{M}(\xi)$ is smaller than or equal to the smallest eigenvalue of $\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}$ (see e.g. Fedorov and Leonov (2013), p. 11) and, consequently, the largest eigenvalue of $\mathbf{M}(\xi)^{-1}$ is larger than or equal to the largest eigenvalue of $\left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-1}$. Then making use of the proof of Theorem 1 we obtain the result (11). ∎
Corollary 1.
E-optimal designs in the fixed effects model are E-optimal for the prediction of individual parameters in the random coefficient regression model.
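Theorem 2 and Corollary 1 can be illustrated with a small numerical sketch: the largest eigenvalue of the block MSE matrix (7) is attained by the fixed effects block $\mathbf{M}(\xi)^{-1}$. The matrices below are hypothetical, generated only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 3, 4                            # illustrative dimensions

A = rng.standard_normal((p, p))
M = A @ A.T + p * np.eye(p)            # hypothetical nonsingular information matrix
Delta = np.diag(rng.uniform(0.5, 2.0, p))

J = np.ones((n, n)) / n
mse = (np.kron(J, np.linalg.inv(M))
       + np.kron(np.eye(n) - J, np.linalg.inv(M + np.linalg.inv(Delta))))

# M <= M + Delta^{-1} in Loewner ordering, so the largest eigenvalue of the
# MSE matrix coincides with the largest eigenvalue of M^{-1}
lam_max_mse = np.linalg.eigvalsh(mse)[-1]
lam_max_fixed = np.linalg.eigvalsh(np.linalg.inv(M))[-1]
print(np.isclose(lam_max_mse, lam_max_fixed))   # True
```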
3.3 Gcriterion
For the prediction in RCR models we define the G-criterion (global criterion) as the maximal sum of the individual expected squared differences between the predicted and the real response across all individuals with respect to all possible observational settings:
\[
\Phi_G = \max_{x \in \mathcal{X}} \sum_{i=1}^{n} \mathrm{E}\left(\left(\mathbf{f}(x)^{\top}\hat{\boldsymbol{\beta}}_i - \mathbf{f}(x)^{\top}\boldsymbol{\beta}_i\right)^{2}\right). \tag{12}
\]
We obtain the following representation of the criterion for approximate designs.
Theorem 3.
The G-criterion for the prediction of individual parameters is for approximate designs given by
\[
\Phi_G(\xi) = \max_{x \in \mathcal{X}}\, \mathbf{f}(x)^{\top}\left(\mathbf{M}(\xi)^{-1} + (n-1)\left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-1}\right)\mathbf{f}(x). \tag{13}
\]
Proof.
The sum of the diagonal $p \times p$ blocks of the MSE matrix (7) is equal to $\mathbf{M}(\xi)^{-1} + (n-1)\left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-1}$, which together with definition (12) yields (13). ∎
The following monotonicity property can be easily verified for the criterion (13).
Lemma 1.
Let $\xi$ and $\tilde{\xi}$ be approximate designs with $\mathbf{M}(\tilde{\xi}) \geq \mathbf{M}(\xi)$ in Loewner ordering. Then $\Phi_G(\tilde{\xi}) \leq \Phi_G(\xi)$.
Note that the G-criterion (13) is not differentiable and, therefore, no optimality condition in the sense of an equivalence theorem (see Kiefer and Wolfowitz (1960)) is straightforward to formulate. Therefore, we will consider in detail the following particular model.
Particular model: straight line regression
We consider the linear regression model
\[
Y_{ij} = \beta_{i1} + \beta_{i2}\, x_j + \varepsilon_{ij} \tag{14}
\]
for two different experimental regions, $\mathcal{X}_1 = [0, 1]$ and $\mathcal{X}_2 = [-1, 1]$.
For this model the function
\[
\psi(x) = \mathbf{f}(x)^{\top}\left(\mathbf{M}(\xi)^{-1} + (n-1)\left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-1}\right)\mathbf{f}(x), \tag{15}
\]
which can also be recognized as the sensitivity function of the D-criterion in RCR models (see Prus and Schwabe (2016b)), is a parabola with a positive leading coefficient (with respect to $x$). Therefore, $\psi$ achieves its maxima at the ends of the intervals. Then the G-criterion (13) simplifies to
\[
\Phi_G(\xi) = \max\{\psi(0), \psi(1)\}
\]
or
\[
\Phi_G(\xi) = \max\{\psi(-1), \psi(1)\}
\]
for all nonsingular designs on $\mathcal{X}_1$ or on $\mathcal{X}_2$, respectively.
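The reduction of the maximization to the interval endpoints can be checked numerically. The sketch below evaluates the sensitivity-type function (15) on a grid for a hypothetical three-point design on $[0, 1]$; the number of individuals, the adjusted dispersion matrix and the design weights are illustrative values only.

```python
import numpy as np

n = 10                                  # number of individuals (illustrative)
Delta = np.diag([1.0, 2.0])             # hypothetical diagonal adjusted dispersion

# a nonsingular design on [0, 1] with three support points (illustrative weights)
xs = np.array([0.0, 0.4, 1.0])
ws = np.array([0.3, 0.3, 0.4])
f = lambda x: np.array([1.0, x])        # straight line regression functions
M = sum(w * np.outer(f(x), f(x)) for x, w in zip(xs, ws))

Amat = np.linalg.inv(M) + (n - 1) * np.linalg.inv(M + np.linalg.inv(Delta))
psi = lambda x: f(x) @ Amat @ f(x)      # the function (15): a convex parabola in x

grid = np.linspace(0.0, 1.0, 1001)
vals = np.array([psi(x) for x in grid])
print(np.max(vals) <= max(psi(0.0), psi(1.0)) + 1e-12)   # True: maximum at an endpoint
```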
Further we will make use of the following simple lemmas.
Lemma 2.
Let
\[
\xi = \begin{pmatrix} x_1, & \dots, & x_K \\ w_1, & \dots, & w_K \end{pmatrix}
\]
be an approximate design in model (14) on $\mathcal{X}_1 = [0, 1]$. Then it holds for the approximate design
\[
\tilde{\xi} = \begin{pmatrix} 0, & 1 \\ 1 - w, & w \end{pmatrix},
\]
where $w = \sum_{k=1}^{K} w_k x_k$, that $\mathbf{M}(\tilde{\xi}) \geq \mathbf{M}(\xi)$ in Loewner ordering.
Lemma 3.
Let
\[
\xi = \begin{pmatrix} x_1, & \dots, & x_K \\ w_1, & \dots, & w_K \end{pmatrix}
\]
be an approximate design in model (14) on $\mathcal{X}_2 = [-1, 1]$. Then it holds for the approximate design
\[
\tilde{\xi} = \begin{pmatrix} -1, & 1 \\ 1 - w, & w \end{pmatrix},
\]
where $w = \frac{1}{2}\left(1 + \sum_{k=1}^{K} w_k x_k\right)$, that $\mathbf{M}(\tilde{\xi}) \geq \mathbf{M}(\xi)$ in Loewner ordering.
Then it follows directly from Lemmas 1, 2 and 3 that at least one of the G-optimal designs in model (14) on $\mathcal{X}_1$ or $\mathcal{X}_2$ is of the form
\[
\xi_1^* = \begin{pmatrix} 0, & 1 \\ 1 - w_1, & w_1 \end{pmatrix}
\]
or
\[
\xi_2^* = \begin{pmatrix} -1, & 1 \\ 1 - w_2, & w_2 \end{pmatrix},
\]
respectively. Then only the optimal weights $w_1^*$ and $w_2^*$ have to be determined.
Further we will additionally assume a diagonal structure of the covariance matrix of the random effects: $\mathbf{D} = \mathrm{diag}(d_1, d_2)$. Then it is easy to verify that $\psi(0)$ and $\psi(-1)$ increase and the corresponding values $\psi(1)$ decrease with increasing weights $w_1$ and $w_2$, respectively. Consequently, the optimal designs are solutions of the equations
\[
\psi(0) = \psi(1) \tag{16}
\]
and
\[
\psi(-1) = \psi(1). \tag{17}
\]
Note that if equation (16) or (17) has no solutions, the resulting optimal designs on $\mathcal{X}_1$ or $\mathcal{X}_2$, respectively, will lead to a singular information matrix.
Then equation (16) may be represented, for the two-point design $\xi_1^*$ with weight $w_1$ at the point $1$, in the form
\[
\frac{2 w_1 - 1}{w_1(1 - w_1)} = (n-1)\,\frac{1 + (m d_1)^{-1} - 2 w_1}{\left(1 + (m d_1)^{-1}\right)\left(w_1 + (m d_2)^{-1}\right) - w_1^{2}}. \tag{18}
\]
For condition (17) we obtain, for the two-point design $\xi_2^*$ with weight $w_2$ at the point $1$,
\[
\left(2 w_2 - 1\right)\left(\frac{1}{w_2(1 - w_2)} + \frac{4(n-1)}{\left(1 + (m d_1)^{-1}\right)\left(1 + (m d_2)^{-1}\right) - \left(2 w_2 - 1\right)^{2}}\right) = 0. \tag{19}
\]
It is easy to see that the only solution of (19) is given by the optimal weight $w_2^* = \frac{1}{2}$.
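Both equalizing conditions can be illustrated numerically: on the symmetric region the weight $1/2$ equalizes $\psi(-1)$ and $\psi(1)$, and on $[0, 1]$ the weight solving equation (16) can be found by bisection. The values of $n$ and of the adjusted dispersion matrix below are hypothetical, chosen only for the sketch.

```python
import numpy as np

n = 10                                  # number of individuals (illustrative)
Delta = np.diag([1.0, 2.0])             # hypothetical diagonal adjusted dispersion
f = lambda x: np.array([1.0, x])

def psi(x, w, endpoints):
    """Sensitivity-type function (15) for a two-point design with weight w at endpoints[1]."""
    a, b = endpoints
    M = (1 - w) * np.outer(f(a), f(a)) + w * np.outer(f(b), f(b))
    Amat = np.linalg.inv(M) + (n - 1) * np.linalg.inv(M + np.linalg.inv(Delta))
    return f(x) @ Amat @ f(x)

# symmetric region [-1, 1]: w = 1/2 equalizes psi(-1) and psi(1), cf. (17) and (19)
print(np.isclose(psi(-1, 0.5, (-1, 1)), psi(1, 0.5, (-1, 1))))   # True

# region [0, 1]: solve psi(0) = psi(1) for the weight by bisection, cf. (16)
g = lambda w: psi(1, w, (0, 1)) - psi(0, w, (0, 1))
lo, hi = 0.01, 0.99                     # g(lo) > 0 and g(hi) < 0 for these values
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
w_opt = 0.5 * (lo + hi)
print(abs(g(w_opt)) < 1e-8)             # the equalizing weight has been found
```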
According to Prus and Schwabe (2016b), the optimality condition for the D-criterion is given by
\[
\mathbf{f}(x)^{\top}\mathbf{M}(\xi)^{-1}\mathbf{f}(x) + (n-1)\,\mathbf{f}(x)^{\top}\left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-1}\mathbf{f}(x) \leq p + (n-1)\,\mathrm{tr}\left(\left(\mathbf{M}(\xi) + \boldsymbol{\Delta}^{-1}\right)^{-1}\mathbf{M}(\xi)\right) \tag{20}
\]
for all $x \in \mathcal{X}$, with equality in (20) for all support points. This condition coincides with (18) for $\mathcal{X}_1$ and with (19) for $\mathcal{X}_2$, which implies for both design regions the equivalence of the D- and G-criteria in the linear regression model (14) with the diagonal covariance structure.
4 Discussion
We have discussed Kiefer's criteria and the global (G-) criterion in RCR models. The obtained general form of Kiefer's criterion can be recognized as a weighted sum of the Kiefer's criteria in fixed effects and Bayesian models, where the weight of the Bayesian part increases with increasing number of individuals. For the E-criterion (the particular Kiefer's criterion with $q \to \infty$), it was proved that optimal designs in fixed effects models retain their optimality for the prediction in RCR models. The G-criterion cannot be factorized into fixed effects and Bayesian parts. This criterion has been discussed in detail for ordinary linear regression on the specific experimental regions $[0, 1]$ and $[-1, 1]$. For this special case, the equivalence of the D- and G-criteria has been established. However, the equivalence of these two criteria does not hold in general for RCR models.
Acknowledgments
This research has been supported by grant SCHW 531/161 of the German Research Foundation (DFG). The author thanks Radoslav Harman and Rainer Schwabe for fruitful discussions.
References
 Atkinson et al. (2007) Atkinson, A. C., Donev, A. N., and Tobias, R. D. (2007). Optimum Experimental Designs, with SAS. Oxford University Press, Oxford.
 Fedorov and Leonov (2013) Fedorov, V. and Leonov, S. (2013). Optimal Design for Nonlinear Response Models. CRC Press, Boca Raton.
 Kiefer (1974) Kiefer, J. (1974). General equivalence theory for optimum designs (approximate theory). Annals of Statistics, 2, 849–879.
 Kiefer and Wolfowitz (1960) Kiefer, J. and Wolfowitz, J. (1960). The equivalence of two extremum problems. Canadian Journal of Mathematics, 12, 363–366.
Prus (2015) Prus, M. (2015). Optimal Designs for the Prediction in Hierarchical Random Coefficient Regression Models. Ph.D. thesis, Otto-von-Guericke University, Magdeburg.
Prus and Schwabe (2016a) Prus, M. and Schwabe, R. (2016a). Interpolation and extrapolation in random coefficient regression models: Optimal design for prediction. mODa 11 – Advances in Model-Oriented Design and Analysis, pages 209–216.
 Prus and Schwabe (2016b) Prus, M. and Schwabe, R. (2016b). Optimal designs for the prediction of individual parameters in hierarchical models. Journal of the Royal Statistical Society: Series B, 78, 175–191.

Wong (1995) Wong, W. K. (1995). On the equivalence of D- and G-optimal designs in heteroscedastic models. Statistics and Probability Letters, 25, 317–321.