## 1 Introduction

The subject of this paper is random coefficients regression (RCR) models. These models were initially introduced in the biosciences (see e.g. Henderson (1975)) and are now popular in many other fields of statistical application. Besides the estimation of the population (fixed) parameters, the prediction of the individual random effects in RCR models is often of primary interest. Locally optimal designs for prediction have been discussed in Prus and Schwabe (2016b) and Prus and Schwabe (2016a). However, these designs depend on the covariance matrix of the random effects. Therefore, robust criteria like minimax (or maximin), which minimize the largest value of the criterion or maximize the smallest efficiency with respect to the unknown variance parameters, are to be considered. For fixed effects models, such robust design criteria have been well discussed in the literature (see e.g. Müller and Pázman (1998), Dette et al. (1995), Schwabe (1997)). For optimal designs in nonlinear models see e.g. Pázman and Pronzato (2007), Pronzato and Walter (1988) and Fackle-Fornius et al. (2015).

Here we focus on the minimax-criterion for prediction in RCR models, which minimizes the "worst case" of the basic criterion with respect to the variance parameters. We choose the integrated mean squared error (IMSE) as the basic criterion and consider particular linear and quadratic regression models in detail.

The structure of this paper is the following: The second part specifies the RCR models and presents the best linear unbiased prediction of the individual random parameters. The third part provides the minimax-optimal designs for the prediction. The last part concludes the paper with a short discussion.

## 2 RCR Model

We consider RCR models, in which the $j$-th observation of individual $i$ is given by

(1) $\quad Y_{ij} = \mathbf{f}(x_j)^\top \boldsymbol{\beta}_i + \varepsilon_{ij}, \qquad j = 1, \dots, m, \quad i = 1, \dots, n,$

where $m$ is the number of observations per individual, $n$ is the number of individuals, and $\mathbf{f} = (f_1, \dots, f_p)^\top$ is a vector of known regression functions. The experimental settings $x_1, \dots, x_m$ come from an experimental region $\mathcal{X}$. The observational errors $\varepsilon_{ij}$ are assumed to have zero mean and common variance $\sigma^2 > 0$. The individual parameters $\boldsymbol{\beta}_i$ have unknown expected value (population mean) $\boldsymbol{\beta}$ and known positive definite covariance matrix $\sigma^2 \mathbf{D}$. All individual parameters and all observational errors are assumed to be uncorrelated.

The best linear unbiased predictor $\hat{\boldsymbol{\beta}}_i$ for the individual parameter $\boldsymbol{\beta}_i$ is a weighted average of the individualized estimator $\hat{\boldsymbol{\beta}}_{i;\mathrm{ind}} = (\mathbf{F}^\top \mathbf{F})^{-1} \mathbf{F}^\top \mathbf{Y}_i$, which is based only on the observations at individual $i$, and the best linear unbiased estimator $\hat{\boldsymbol{\beta}} = (\mathbf{F}^\top \mathbf{F})^{-1} \mathbf{F}^\top \bar{\mathbf{Y}}$ for the population mean parameter. Here $\mathbf{Y}_i = (Y_{i1}, \dots, Y_{im})^\top$ is the individual vector of observations, $\bar{\mathbf{Y}} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{Y}_i$ is the mean observational vector and $\mathbf{F} = (\mathbf{f}(x_1), \dots, \mathbf{f}(x_m))^\top$ is the design matrix, which is assumed to be of full column rank.

The mean squared error matrix of the vector of all predictors of all individual parameters is given by the following formula (see e.g. Prus and Schwabe (2016b)):

(2)

where $\mathbf{I}_n$ denotes the $n \times n$ identity matrix, $\mathbf{1}_n$ is the vector of length $n$ with all entries equal to $1$ and $\otimes$ denotes the Kronecker product.

## 3 Optimal Designs
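As a numerical illustration of the prediction described above, the following sketch simulates an RCR model and forms the predictor as a weighted average of the individualized least-squares estimator and the estimated population mean. The shrinkage weight used here is the standard Bayesian-type form, and all numerical values (dimensions, variances, seed) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensions: n individuals, m observations each; straight line f(x) = (1, x)
n, m = 200, 10
x = np.linspace(0.0, 1.0, m)
F = np.column_stack([np.ones(m), x])      # design matrix (m x 2), full column rank
sigma2 = 0.25                             # error variance
D = np.diag([0.5, 1.0])                   # covariance of random effects (in units of sigma2)

beta = np.array([1.0, 2.0])               # population mean parameter
B = beta + rng.multivariate_normal(np.zeros(2), sigma2 * D, size=n)
Y = B @ F.T + rng.normal(scale=np.sqrt(sigma2), size=(n, m))

# Individualized least-squares estimators and the estimator of the population mean
FtF = F.T @ F
B_ind = Y @ F @ np.linalg.inv(FtF)        # per-individual LSE (n x 2)
beta_hat = B_ind.mean(axis=0)             # BLUE of the population mean

# Predictor = weighted average of the individualized estimator and beta_hat
W = np.linalg.solve(FtF + np.linalg.inv(D), FtF)
B_pred = B_ind @ W.T + beta_hat @ (np.eye(2) - W).T

# Shrinkage should predict the individual parameters better on average
print(np.mean((B_pred - B) ** 2) < np.mean((B_ind - B) ** 2))
```

The weight matrix `W` shrinks each individualized estimator toward the population-mean estimate; individuals with noisy data borrow more strength from the population.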

For this paper we define the exact designs as follows:

$\xi = \begin{pmatrix} x_1 & \dots & x_k \\ m_1 & \dots & m_k \end{pmatrix},$

where $x_1, \dots, x_k$ are the distinct experimental settings (support points), $k \le m$, and $m_1, \dots, m_k$ are the corresponding numbers of replications, $\sum_{j=1}^{k} m_j = m$. For analytical purposes we will focus on the approximate designs, which we define as

$\xi = \begin{pmatrix} x_1 & \dots & x_k \\ w_1 & \dots & w_k \end{pmatrix},$

where $w_j$ denotes the weight of observations at the support point $x_j$ and only the conditions $w_j \geq 0$ and $\sum_{j=1}^{k} w_j = 1$ have to be satisfied (integer numbers of replications are not required). Further we will use the notation

$\mathbf{M}(\xi) = \sum_{j=1}^{k} w_j\, \mathbf{f}(x_j)\, \mathbf{f}(x_j)^\top$

for the standardized information matrix from the fixed effects model and $\boldsymbol{\Delta} = m\,\mathbf{D}$ for the adjusted dispersion matrix of the random effects. We assume the matrix $\mathbf{M}(\xi)$ to be non-singular. With this notation the definition of the mean squared error matrix (2) can be extended to approximate designs when we neglect the constant term.
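The standardized information matrix of an approximate design can be assembled directly from the support points and weights. A minimal Python sketch (design and weights hypothetical), here for the quadratic model with regression functions $1, x, x^2$:

```python
import numpy as np

def info_matrix(f, support, weights):
    """Standardized information matrix M(xi) = sum_j w_j f(x_j) f(x_j)^T."""
    vecs = [np.asarray(f(x), dtype=float) for x in support]
    return sum(w * np.outer(v, v) for w, v in zip(weights, vecs))

# Quadratic regression on [-1, 1]: f(x) = (1, x, x^2)
f = lambda x: [1.0, x, x * x]

# A symmetric three-point design (weights hypothetical)
M = info_matrix(f, support=[-1.0, 0.0, 1.0], weights=[0.3, 0.4, 0.3])
print(np.round(M, 3))
print(np.linalg.matrix_rank(M))  # must be full rank (non-singular), as assumed above
```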

### 3.1 IMSE-criterion

In this work we focus on the integrated mean squared error (IMSE) criterion. For the prediction of the individual parameters we define the IMSE-criterion (see also Prus and Schwabe (2016b)) as the sum over all individuals $i = 1, \dots, n$ of the expected integrated squared distances of the predicted and the real response, $\mathbf{f}(x)^\top \hat{\boldsymbol{\beta}}_i$ and $\mathbf{f}(x)^\top \boldsymbol{\beta}_i$, with respect to a suitable measure $\nu$ on the experimental region $\mathcal{X}$, which is typically chosen to be uniform on $\mathcal{X}$ with $\nu(\mathcal{X}) = 1$. For an approximate design $\xi$ the IMSE-criterion has the form

(3)

where $\mathbf{V} = \int_{\mathcal{X}} \mathbf{f}(x)\, \mathbf{f}(x)^\top\, \nu(\mathrm{d}x)$, which may be recognized as the information matrix for the weighting measure $\nu$ in the fixed effects model.
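For polynomial regression the matrix of integrated regression functions has simple closed-form entries (moments of the weighting measure). The sketch below (helper name hypothetical) checks a numerical integral against the exact moments for the straight line with uniform weighting on $[0, 1]$:

```python
import numpy as np

def v_matrix(f, a, b, num=20001):
    """V = integral of f(x) f(x)^T under the uniform measure on [a, b]."""
    xs = np.linspace(a, b, num)
    w = np.full(num, (b - a) / (num - 1))   # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    vals = np.array([np.outer(f(x), f(x)) for x in xs])
    return np.tensordot(w, vals, axes=1) / (b - a)

# Straight line f(x) = (1, x), uniform weighting on [0, 1]
V = v_matrix(lambda x: np.array([1.0, x]), 0.0, 1.0)
print(np.round(V, 4))  # entries are the moments 1, 1/2, 1/3
```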

### 3.2 Minimax-criteria

In this section we consider optimal designs for the prediction in particular RCR models: straight line and quadratic regression. We define the minimax-criterion as the worst case of the IMSE-criterion with respect to the unknown variance parameters.

We additionally assume a diagonal covariance structure of the random effects. Then the IMSE-criterion (3) increases with increasing values of the variance parameters. However, if all these parameters are large, the criterion function tends to the IMSE-criterion in the fixed effects model (multiplied by the number of individuals). Therefore, we fix some of the variances and consider the behavior of the minimax-optimal designs in the resulting particular cases.
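A numerical analogue of this construction: for a criterion that grows with a variance parameter $d$, the minimax design minimizes the worst case over a plausible range of $d$. The sketch below uses a generic Bayesian-type surrogate criterion for the straight line (not the exact criterion of this paper; all numerical values hypothetical) purely to illustrate the min-max search:

```python
import numpy as np

V = np.array([[1.0, 0.5], [0.5, 1 / 3]])        # uniform weighting on [0, 1]

def criterion(w, d):
    """Surrogate IMSE-type value of the two-point design {0: 1-w, 1: w} for
    slope-variance parameter d (intercept variance fixed; illustrative only)."""
    M = (1 - w) * np.outer([1, 0], [1, 0]) + w * np.outer([1, 1], [1, 1])
    Dinv = np.diag([1 / 100.0, 1 / d])          # large fixed intercept variance
    return np.trace(V @ np.linalg.inv(M + Dinv))

weights = np.linspace(0.05, 0.95, 181)
d_grid = np.linspace(0.1, 10.0, 50)             # plausible range of the variance

# Minimax: minimize over designs the worst case over the variance parameter
worst = [max(criterion(w, d) for d in d_grid) for w in weights]
w_minimax = float(weights[int(np.argmin(worst))])
print(round(w_minimax, 3))
```

Because the surrogate criterion is monotone in $d$, the worst case sits at the upper end of the grid, and the search reduces to a one-dimensional minimization over the design weight.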

Note that for the special RCR models in which only the intercept is random, optimal designs for fixed effects models retain their optimality (see Prus and Schwabe (2016b)).

#### Straight line regression

We consider the linear regression model

(4) $\quad Y_{ij} = \beta_{i1} + \beta_{i2}\, x_j + \varepsilon_{ij}$

with a diagonal covariance structure of the random effects and a small intercept variance. For the IMSE-criterion we choose the uniform weighting on the experimental region. As proved in Prus (2015), ch. 5, the IMSE-optimal designs for the prediction in model (4) are two-point designs, so that only the optimal weight of the observations at one of the two support points has to be determined. Then we obtain a corresponding particular form of the IMSE-criterion (3).

It is easy to see that this criterion increases with increasing values of the slope variance. The latter property allows us to define the minimax-criterion as follows:

which results in

and leads to the following optimal weight:

Figure 1 illustrates the behavior of the optimal design with respect to the number of individuals. As we can see in Figure 1, the optimal weight increases with an increasing number of individuals. Figure 2 presents the efficiency of the minimax-optimal design with respect to the locally optimal designs as a function of the rescaled slope variance for several fixed numbers of individuals. For all numbers of individuals the efficiency is high and increases with increasing slope variance.

#### Quadratic regression

We investigate the quadratic regression model

(5) $\quad Y_{ij} = \beta_{i1} + \beta_{i2}\, x_j + \beta_{i3}\, x_j^2 + \varepsilon_{ij}$

on the standard symmetric design region $[-1, 1]$ with a diagonal covariance matrix of the random effects. For the IMSE-criterion we apply the uniform weighting on $[-1, 1]$.

Because of its complexity, the general form of the IMSE-criterion for the quadratic regression will not be presented here. As mentioned at the beginning of this section, the IMSE-criterion increases with increasing variances. Hence, we fix some of the variances at small values and consider minimax-criteria for the resulting particular cases.
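The monotonicity and the limiting behavior invoked here can be checked numerically with a Bayesian-type dispersion, which tends to the fixed-effects dispersion as the variances grow (a generic sketch with hypothetical design and values, not the exact criterion of the paper):

```python
import numpy as np

# Quadratic model f(x) = (1, x, x^2) and a symmetric three-point design on [-1, 1]
f = lambda x: np.array([1.0, x, x * x])
M = sum(w * np.outer(f(x), f(x)) for x, w in [(-1.0, 0.3), (0.0, 0.4), (1.0, 0.3)])
V = np.array([[1.0, 0.0, 1 / 3],
              [0.0, 1 / 3, 0.0],
              [1 / 3, 0.0, 1 / 5]])            # uniform weighting on [-1, 1]

def imse_type(d):
    """IMSE-type value when all three random coefficients have variance d."""
    return np.trace(V @ np.linalg.inv(M + np.eye(3) / d))

fixed_effects = np.trace(V @ np.linalg.inv(M))
vals = [imse_type(d) for d in (0.1, 1.0, 10.0, 1e6)]

print(all(a < b for a, b in zip(vals, vals[1:])))   # increases with the variance
print(abs(vals[-1] - fixed_effects) < 1e-3)         # tends to the fixed-effects value
```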

Case 1. Small intercept and slope variances

If both the intercept and the slope variances are small, the worst case of the IMSE-criterion is given by its limiting value. We define, therefore, the minimax-criterion in this case as

which leads to the optimal design

Figure 3 illustrates the behavior of the optimal weight with respect to the number of individuals.

Case 2. Small intercept and quadratic-term variances

If both variances of the intercept and of the coefficient of the quadratic term are small, the minimax-criterion can be defined as

Case 3. Small quadratic-term variance

If only the variance of the coefficient of the quadratic term is small, we obtain the following worst case of the IMSE-criterion:

For both cases 2 and 3 we obtain the following optimal weight of the observations at the support point:

which is described by Figure 4.

Case 4. Small slope variance

If only the slope variance is small, we determine the minimax-criterion as the limiting value

of the IMSE-criterion. The resulting optimal weight is given by the following formula:

Case 5. Small intercept variance

For small intercept variance the minimax-criterion can be defined as

which leads to the minimax-optimal weight

The behaviors of the optimal designs in cases 4 and 5 are illustrated by Figures 5 and 6, respectively.

As we can see from the figures, the optimal weights increase with an increasing number of individuals in cases 1, 2, 3 and 5 and decrease in case 4. For cases 1 and 2 we consider the efficiency of the minimax-optimal designs with respect to the locally optimal designs as a function of the corresponding rescaled variances for fixed numbers of individuals (Figures 7 and 8).

The efficiency turns out to be high and to increase with increasing variance parameters for both cases 1 and 2 and all considered numbers of individuals.

## 4 Discussion

In this paper we have considered minimax-optimal designs for the IMSE-criterion for the prediction in particular RCR models: linear and quadratic regression. We have assumed a diagonal structure of the covariance matrix of the random effects. In this case the IMSE-criterion increases with increasing values of all variance parameters. If all variances converge to infinity, the limiting criterion coincides with the IMSE-criterion in fixed effects models and, consequently, the optimal designs in fixed effects models retain their optimality for the prediction. If some of the variances are small, the minimax-optimal designs in RCR models depend on the number of individuals and differ from the optimal designs in fixed effects models. For some particular cases we have considered the efficiency of the minimax-optimal designs with respect to the locally optimal designs. The efficiency turns out to be high and to increase with increasing variance parameters.

## Acknowledgment

This research has been supported by grant SCHW 531/16-1 of the German Research Foundation (DFG).

## References

- Dette et al. (1995) Dette, H., Heiligers, B., and Studden, W. J. (1995). Minimax designs in linear regression models. The Annals of Statistics, 23, 30–40.
- Fackle-Fornius et al. (2015) Fackle-Fornius, E., Miller, F., and Nyquist, H. (2015). Implementation of maximin efficient designs in dose-finding studies. Pharmaceutical Statistics, 14, 63–73.
- Henderson (1975) Henderson, C. R. (1975). Best linear unbiased estimation and prediction under a selection model. Biometrics, 31, 423–477.
- Müller and Pázman (1998) Müller, C. H. and Pázman, A. (1998). Applications of necessary and sufficient conditions for maximin efficient designs. Metrika, 48, 1–19.
- Pázman and Pronzato (2007) Pázman, A. and Pronzato, L. (2007). Quantile and probability-level criteria for nonlinear experimental design. In J. L. Fidalgo, J. M. Rodríguez-Díaz, and B. Torsney, editors, mODa 8 - Advances in Model-Oriented Design and Analysis, pages 157–164. Physica.
- Pronzato and Walter (1988) Pronzato, L. and Walter, E. (1988). Robust experimental design via maximin optimality. Mathematical Biosciences, 89, 161–176.
- Prus (2015) Prus, M. (2015). Optimal Designs for the Prediction in Hierarchical Random Coefficient Regression Models. Ph.D. thesis, Otto-von-Guericke University, Magdeburg.
- Prus and Schwabe (2016a) Prus, M. and Schwabe, R. (2016a). Interpolation and extrapolation in random coefficient regression models: Optimal design for prediction. In C. H. Müller, J. Kunert, and A. C. Atkinson, editors, mODa 11 - Advances in Model-Oriented Design and Analysis, pages 209–216. Springer.
- Prus and Schwabe (2016b) Prus, M. and Schwabe, R. (2016b). Optimal designs for the prediction of individual parameters in hierarchical models. Journal of the Royal Statistical Society: Series B, 78, 175–191.
- Schwabe (1997) Schwabe, R. (1997). Maximin efficient designs. Another view at D-optimality. Statistics and Probability Letters, 35, 109–114.
