## 1 Introduction

Hierarchical random effects models are used for different purposes. Common applications, e.g. in clinical research, are random effects meta-analyses of several clinical trials using individual patient data (IPD), multi-center trials assuming random center effects, or mixed/random effects models for repeated measurements (MMRM) for longitudinal data within subjects. Clinical trials or meta-analyses of clinical trials may involve several populations or subgroups of patients (e.g. defined by genetic biomarkers), where in some settings between-population variability may be modeled using a hierarchical approach. Apart from clinical trials and observational studies, data on quality parameters of drugs arise from hierarchical settings with multiple layers, i.e. with multiple sources of variability according to the underlying manufacturing process, e.g. given by manufacturing sites, production batches and samples within batches. In general, the main focus is on population parameters related to the expected treatment effects or group differences *among* all units of an upper level (e.g. trials in IPD meta-analyses, centers in multi-center trials, patients in longitudinal trials, batches in quality control, etc.). Several authors have considered optimal design for the estimation of population parameters (expected values of random effects) in similar models, see, e.g., [Fedorov and Jones(2005)], [Fedorov and Leonov(2013)], [Schwabe and Schmelter(2008)] and [Lemme et al.(2015)].
However, prediction of the outcome in the *individual* units may also be of interest in several settings, such as treatment effects in single centers (to assess the qualification of individual clinics), individual manufacturing sites in manufacturing control, or treatment effects in different subpopulations of patients. In these cases, the questions arise whether optimal designs for population parameters can also be used for individual predictions of random effects, whether and to what extent optimal designs for individual predictions differ from those for population parameters, and which efficiency loss (or sample size increase) can be anticipated if another, conventional design is chosen.

Therefore, we investigated optimal designs for the prediction of random effects, to be applied, e.g., in multi-center trials, and compared them to a conventional design balanced with respect to treatment allocation.

The structure of the paper is as follows: In Section 2 a model of a multi-center trial is specified and the best linear unbiased predictions of the individual center parameters (intercepts and treatment effects) are derived. In Section 3 analytical results are presented to characterize optimal designs for prediction in multi-center trials. The results are illustrated by some numerical examples. The paper is concluded by a short discussion.

## 2 Model Specification

We consider a multi-center trial with $n$ different centers. In all of these centers individuals are allocated to two treatment groups. In the first group (denoted by “1”) the individuals receive an active treatment, while in the second group (denoted by “2”) a placebo or a control treatment is applied.

Denote by $\mu_i$ and $\alpha_i$ the intercept (mean response under placebo or control) and the effect of the active treatment (compared to placebo or control), respectively, in center $i = 1, \dots, n$, which both may vary across the centers. The response of individual $j = 1, \dots, m$ in center $i$ can be described as

$$Y_{ij} = \mu_i + \alpha_i x_{ij} + \varepsilon_{ij}, \qquad (1)$$

where $x_{ij}$ is equal to $1$ if individual $j$ belongs to the treatment group and equal to $0$ for the control (or placebo) group, and $\varepsilon_{ij}$ denotes the random variation in the response of the individuals. The individual variations $\varepsilon_{ij}$ are assumed to have zero mean and to be homoscedastic with common variance $\sigma^2$. The centers are assumed to be similar and, hence, representatives of a larger entity of centers. Thus the center-specific intercepts and treatment effects can be assumed to be random with (unknown) expected values $\mathrm{E}(\mu_i) = \mu$ and $\mathrm{E}(\alpha_i) = \alpha$, characterizing the mean intercept and the mean treatment effect across the centers, and with covariance structure $\mathrm{Cov}\big((\mu_i, \alpha_i)^\top\big) = \sigma^2 \mathbf{D}$ for some positive definite dispersion matrix $\mathbf{D}$. All random effects and all individual variations are assumed to be uncorrelated.

For the sake of simplicity we further assume that the total number of individuals is the same for all centers ($m$ individuals per center) and that the allocation rate is constant across the centers, i.e. the number $m_1$ of individuals in the treatment group is the same for all centers. The design problem can then be formulated in terms of finding the optimal allocation rate $w = m_1/m$ for the treatment group.

Because of the exchangeability of the individuals within each center we may sort them in the analysis, regardless of randomization, in such a way that the first $m_1$ individuals to be analyzed receive the active treatment and the remaining $m_2 = m - m_1$ individuals are in the control group. Then the experimental settings in (1) can be specified by

$$x_{ij} = x_j = \begin{cases} 1, & j = 1, \dots, m_1, \\ 0, & j = m_1 + 1, \dots, m, \end{cases}$$

and are independent of the center $i$.

Hence, the multi-center model (1) can be identified as a particular case of the random coefficient regression model

$$Y_{ij} = \mathbf{f}(x_j)^\top \boldsymbol{\beta}_i + \varepsilon_{ij} \qquad (2)$$

investigated by [Prus and Schwabe(2016)], when the regression functions and the center parameters are specified by $\mathbf{f}(x) = (1, x)^\top$ and $\boldsymbol{\beta}_i = (\mu_i, \alpha_i)^\top$, respectively. In general this model can be written in vector notation as

$$\mathbf{Y}_i = \mathbf{F}\,\boldsymbol{\beta}_i + \boldsymbol{\varepsilon}_i, \qquad (3)$$

where $\mathbf{Y}_i = (Y_{i1}, \dots, Y_{im})^\top$ and $\boldsymbol{\varepsilon}_i = (\varepsilon_{i1}, \dots, \varepsilon_{im})^\top$ are the $m$-dimensional vectors of observations and individual variations at center $i$, respectively, and $\mathbf{F} = (\mathbf{f}(x_1), \dots, \mathbf{f}(x_m))^\top$ is the within-center design matrix, which is equal across all centers.

For the present multi-center model the design matrix simplifies to

$$\mathbf{F} = \begin{pmatrix} \mathbf{1}_{m_1} & \mathbf{1}_{m_1} \\ \mathbf{1}_{m_2} & \mathbf{0}_{m_2} \end{pmatrix},$$

where $\mathbf{1}_k$ and $\mathbf{0}_k$ denote the $k$-dimensional vectors with all entries equal to $1$ and $0$, respectively.
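As a concrete check of this structure, the within-center design matrix and the information matrix $\mathbf{F}^\top\mathbf{F}$ can be built numerically. The following is an illustrative sketch, not code from the paper; the values of $m$ and $m_1$ are arbitrary, and the two-column structure $\mathbf{f}(x) = (1, x)^\top$ with the first $m_1$ individuals treated is assumed:

```python
import numpy as np

m, m1 = 10, 4                                     # individuals per center, number treated
m2 = m - m1
x = np.concatenate([np.ones(m1), np.zeros(m2)])   # treatment indicators x_1, ..., x_m
F = np.column_stack([np.ones(m), x])              # rows f(x_j)' = (1, x_j)

M = F.T @ F                                       # information matrix F'F
print(M)                                          # [[m, m1], [m1, m1]]
```

The information matrix has the entries $m$, $m_1$, $m_1$, $m_1$, which is the basis of all mean squared error computations below.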

According to [Prus and Schwabe(2016)] (see also [Fedorov and Jones(2005)]), in model (2) the best linear unbiased predictors (BLUPs)

$$\hat{\boldsymbol{\beta}}_i = \left(\mathbf{F}^\top\mathbf{F} + \mathbf{D}^{-1}\right)^{-1}\left(\mathbf{F}^\top\mathbf{F}\,\hat{\boldsymbol{\beta}}_{i;\mathrm{ind}} + \mathbf{D}^{-1}\hat{\boldsymbol{\beta}}\right) \qquad (4)$$

of the random parameters $\boldsymbol{\beta}_i$ are weighted combinations of the individual estimators $\hat{\boldsymbol{\beta}}_{i;\mathrm{ind}} = (\mathbf{F}^\top\mathbf{F})^{-1}\mathbf{F}^\top\mathbf{Y}_i$ based only on the observations in center $i$ and the best linear unbiased estimator (BLUE)

$$\hat{\boldsymbol{\beta}} = (\mathbf{F}^\top\mathbf{F})^{-1}\mathbf{F}^\top\bar{\mathbf{Y}}$$

of the population parameter $\boldsymbol{\beta} = (\mu, \alpha)^\top$, where $\bar{\mathbf{Y}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{Y}_i$ is the mean observational vector averaged across the centers. We additionally assume that the center intercepts and the center treatment effects are uncorrelated for all centers, i.e. $\mathbf{D} = \mathrm{diag}(d_1, d_2)$, where $d_1$ and $d_2$ are the variance ratios of the intercepts and the treatment effects in relation to the observational variance $\sigma^2$ of the individuals.

With the standard notations $\bar{Y}_{i,1}$ and $\bar{Y}_{i,2}$ for the mean response in the treatment (“1”) and the control (“2”) groups in center $i$, and $\bar{Y}_1 = \frac{1}{n}\sum_{i=1}^{n}\bar{Y}_{i,1}$ and $\bar{Y}_2 = \frac{1}{n}\sum_{i=1}^{n}\bar{Y}_{i,2}$ for the overall means of the treatment and the control groups, respectively, the BLUPs for the center parameters of the random intercepts and the random treatment effects in model (1) can be written as weighted combinations

$$\hat{\mu}_i = \bar{Y}_2 + k_{11}\,\big(\bar{Y}_{i,2} - \bar{Y}_2\big) + k_{12}\,\big((\bar{Y}_{i,1} - \bar{Y}_{i,2}) - (\bar{Y}_1 - \bar{Y}_2)\big) \qquad (5)$$

and

$$\hat{\alpha}_i = \big(\bar{Y}_1 - \bar{Y}_2\big) + k_{21}\,\big(\bar{Y}_{i,2} - \bar{Y}_2\big) + k_{22}\,\big((\bar{Y}_{i,1} - \bar{Y}_{i,2}) - (\bar{Y}_1 - \bar{Y}_2)\big) \qquad (6)$$

with weights $k_{rs}$ given by the entries of the matrix $\mathbf{K} = \left(\mathbf{F}^\top\mathbf{F} + \mathbf{D}^{-1}\right)^{-1}\mathbf{F}^\top\mathbf{F}$, explicitly $k_{11} = (m_1 m_2 + m/d_2)/\Delta$, $k_{12} = (m_1/d_2)/\Delta$, $k_{21} = (m_1/d_1)/\Delta$ and $k_{22} = (m_1 m_2 + m_1/d_1)/\Delta$, where $\Delta = (m + 1/d_1)(m_1 + 1/d_2) - m_1^2$.
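To illustrate the prediction step, the following sketch simulates data from model (1) and computes the BLUPs via the shrinkage representation $\hat{\boldsymbol{\beta}}_i = (\mathbf{F}^\top\mathbf{F} + \mathbf{D}^{-1})^{-1}(\mathbf{F}^\top\mathbf{F}\,\hat{\boldsymbol{\beta}}_{i;\mathrm{ind}} + \mathbf{D}^{-1}\hat{\boldsymbol{\beta}})$. This is an illustrative implementation consistent with the general form in Prus and Schwabe (2016), not code from the paper; all variable names and parameter values are chosen for the example only. A built-in check: the BLUPs average exactly to the BLUE of the population parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, m1 = 8, 20, 10                    # centers, individuals per center, treated per center
m2 = m - m1
d1, d2, sigma = 1.0, 0.5, 1.0           # variance ratios and residual standard deviation
mu, alpha = 2.0, 1.0                    # population intercept and treatment effect

x = np.concatenate([np.ones(m1), np.zeros(m2)])
F = np.column_stack([np.ones(m), x])
M = F.T @ F
D = np.diag([d1, d2])

# simulate center-specific parameters and responses (row i: center i)
beta_i = np.column_stack([
    rng.normal(mu, sigma * np.sqrt(d1), n),
    rng.normal(alpha, sigma * np.sqrt(d2), n),
])
Y = beta_i @ F.T + rng.normal(0, sigma, (n, m))

# per-center least squares estimates (F'F)^{-1} F' Y_i and the BLUE (their average)
beta_ind = Y @ F @ np.linalg.inv(M)
beta_blue = beta_ind.mean(axis=0)

# BLUPs: shrink the individual estimates towards the BLUE
A = np.linalg.inv(M + np.linalg.inv(D))
blup = (beta_ind @ M + beta_blue @ np.linalg.inv(D)) @ A.T

print(np.allclose(blup.mean(axis=0), beta_blue))
```

The final check holds exactly by construction: averaging (4) over the centers reproduces the BLUE, since the individual estimators average to it.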

To measure the quality of a design we will use the mean squared error (MSE) matrix of the BLUP $\hat{\mathbf{B}} = (\hat{\boldsymbol{\beta}}_1^\top, \dots, \hat{\boldsymbol{\beta}}_n^\top)^\top$ of the complete vector $\mathbf{B} = (\boldsymbol{\beta}_1^\top, \dots, \boldsymbol{\beta}_n^\top)^\top$ of all random parameters. This MSE matrix can be computed by means of the following formula:

$$\mathrm{MSE} = \sigma^2\left[\left(\mathbb{I}_n - \tfrac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^\top\right) \otimes \left(\mathbf{F}^\top\mathbf{F} + \mathbf{D}^{-1}\right)^{-1} + \tfrac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^\top \otimes \left(\mathbf{F}^\top\mathbf{F}\right)^{-1}\right] \qquad (7)$$

(see [Prus and Schwabe(2016)]), where $\mathbb{I}_n$ denotes the $n \times n$ identity matrix, $\mathbf{1}_n$ the $n$-dimensional vector of ones, and $\otimes$ is the symbol for the Kronecker product of matrices or vectors.

Further denote by $\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_n)^\top$ the vector of treatment effects for all centers. Then $\boldsymbol{\alpha} = (\mathbb{I}_n \otimes \mathbf{e}_2^\top)\,\mathbf{B}$ for the second unit vector $\mathbf{e}_2 = (0, 1)^\top$ and, hence,

$$\mathrm{MSE}_{\boldsymbol{\alpha}} = (\mathbb{I}_n \otimes \mathbf{e}_2^\top)\,\mathrm{MSE}\,(\mathbb{I}_n \otimes \mathbf{e}_2) \qquad (8)$$

for the MSE matrix of the BLUP $\hat{\boldsymbol{\alpha}} = (\hat{\alpha}_1, \dots, \hat{\alpha}_n)^\top$ of $\boldsymbol{\alpha}$. Using this and formula (7) we obtain the MSE matrix

$$\mathrm{MSE}_{\boldsymbol{\alpha}} = \sigma^2\left[\left(\mathbb{I}_n - \tfrac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^\top\right)\frac{m + 1/d_1}{(m + 1/d_1)(m_1 + 1/d_2) - m_1^2} + \tfrac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^\top\,\frac{m}{m_1 m_2}\right]. \qquad (9)$$
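As a numerical sanity check, the treatment-effect block (9) can be compared against the general Kronecker form (7). The sketch below is an illustration only (the values of $n$, $m$, $m_1$, $d_1$, $d_2$ are arbitrary), assuming the MSE matrix has the form $\sigma^2\big[(\mathbb{I}_n - \frac{1}{n}\mathbf{1}\mathbf{1}^\top) \otimes (\mathbf{F}^\top\mathbf{F} + \mathbf{D}^{-1})^{-1} + \frac{1}{n}\mathbf{1}\mathbf{1}^\top \otimes (\mathbf{F}^\top\mathbf{F})^{-1}\big]$ as in Prus and Schwabe (2016):

```python
import numpy as np

n, m, m1 = 5, 12, 4                # centers, individuals per center, treated per center
m2 = m - m1
d1, d2 = 0.8, 0.5                  # variance ratios of intercepts / treatment effects

M = np.array([[m, m1], [m1, m1]], dtype=float)   # F'F for f(x) = (1, x)'
D = np.diag([d1, d2])
I_n = np.eye(n)
J_n = np.ones((n, n)) / n

# full MSE matrix of all center parameters, formula (7) with sigma^2 = 1
H = np.linalg.inv(M + np.linalg.inv(D))
mse_full = np.kron(I_n - J_n, H) + np.kron(J_n, np.linalg.inv(M))

# extract the treatment-effect block: second coordinate of each center
idx = np.arange(1, 2 * n, 2)
mse_alpha = mse_full[np.ix_(idx, idx)]

# closed form (9)
h22 = (m + 1 / d1) / ((m + 1 / d1) * (m1 + 1 / d2) - m1**2)
mse_alpha_closed = (I_n - J_n) * h22 + J_n * m / (m1 * m2)

print(np.allclose(mse_alpha, mse_alpha_closed))
```

The two constructions agree, since the $(2,2)$ entries of $(\mathbf{F}^\top\mathbf{F} + \mathbf{D}^{-1})^{-1}$ and $(\mathbf{F}^\top\mathbf{F})^{-1}$ are exactly the two scalar factors in (9).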

## 3 Optimal Design

As individuals are interchangeable within treatment groups, we may define an exact within-center design

$$\xi = \begin{pmatrix} 1 & 0 \\ m_1 & m_2 \end{pmatrix} \qquad (10)$$

by the allocation numbers $m_1$ and $m_2 = m - m_1$ of individuals to the treatment and the control group, respectively.

For analytical purposes, we generalize this to the definition of an approximate design

$$\xi = \begin{pmatrix} 1 & 0 \\ w & 1 - w \end{pmatrix}, \qquad (11)$$

where $w = m_1/m$ is the allocation rate to the treatment group and $1 - w$ is the allocation rate to the control group. For finding an optimal design only the optimal allocation rate $w^*$ to the treatment group has to be determined.

For an approximate design the definition of the MSE matrix (9) of the BLUP is extended in a straightforward manner and can be rewritten (neglecting the factor $\sigma^2$) as

$$\mathrm{MSE}(w) = \left(\mathbb{I}_n - \tfrac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^\top\right)\frac{m + 1/d_1}{(m + 1/d_1)(wm + 1/d_2) - w^2 m^2} + \tfrac{1}{n}\,\mathbf{1}_n\mathbf{1}_n^\top\,\frac{1}{m\,w\,(1 - w)} \qquad (12)$$

in terms of the allocation rate $w$. The approach of approximate designs is appropriate insofar as the total number $m$ of individuals in each center should be sufficiently large. Otherwise optimal exact designs have to be obtained by adequate rounding of $w^*$ to a multiple of $1/m$ (see below).

For the assessment of the MSE matrix we focus on the A-optimality criterion, which averages the mean squared errors of the center treatment effects. More specifically, the A-criterion is the trace of the MSE matrix of the prediction of the center treatment effects. For an approximate design we get

$$\Phi(w) = (n - 1)\,\frac{m + 1/d_1}{(m + 1/d_1)(wm + 1/d_2) - w^2 m^2} + \frac{1}{m\,w\,(1 - w)} \qquad (13)$$

for the criterion function in terms of the allocation rate $w$. Because, in general, there is no explicit solution for the optimal allocation rate $w^*$ minimizing (13), we will give an insight into the qualitative behavior by some numerical examples below.
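In the absence of a closed-form minimizer, the optimal allocation rate can be determined by a one-dimensional numerical search, and an exact design obtained by comparing the two adjacent integer allocations. The sketch below is an illustration only (parameter values are arbitrary), assuming the criterion has the trace form $\Phi(w) = (n-1)\,(m + 1/d_1)/\big((m + 1/d_1)(wm + 1/d_2) - w^2 m^2\big) + 1/(m\,w\,(1-w))$ consistent with the model above:

```python
import numpy as np

def criterion(w, n, m, d1, d2):
    """Trace of the MSE matrix of the predicted treatment effects (factor sigma^2 omitted)."""
    a = m + 1.0 / d1
    between = (n - 1) * a / (a * (w * m + 1.0 / d2) - (w * m) ** 2)
    overall = 1.0 / (m * w * (1.0 - w))
    return between + overall

def optimal_rate(n, m, d1, d2):
    """Grid search for the optimal allocation rate w* on (0, 1)."""
    w = np.linspace(0.001, 0.999, 9981)   # step size 1e-4
    return w[np.argmin(criterion(w, n, m, d1, d2))]

n, m, d1, d2 = 10, 20, 0.5, 0.5
w_star = optimal_rate(n, m, d1, d2)       # exceeds 1/2 whenever d2 > 0

# exact design: by convexity, the better of the two adjacent integer allocations
candidates = [np.floor(w_star * m), np.ceil(w_star * m)]
m1_exact = min(candidates, key=lambda m1: criterion(m1 / m, n, m, d1, d2))
```

For $d_2 \to 0$ the second term of the criterion dominates and the search returns the balanced allocation $w^* = 1/2$; for growing $d_2$ the optimum moves towards the treatment group.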

It is worthwhile mentioning that the criterion (13) is convex and, therefore, an optimal exact design may be obtained by choosing the better of the two exact designs adjacent to an optimal approximate design.

### 3.1 Example.

For illustrative purposes we consider a numerical example with fixed numbers $n$ of centers and $m$ of individuals in each center. Figure 1 exhibits the behavior of the optimal allocation rate $w^*$ to the treatment group in dependence of the variance ratio $d_2$ of the treatment effects for several fixed values of the variance ratio $d_1$ of the intercepts. For reasons of presentation we plot the optimal allocation rate against the rescaled variance ratio $\rho_2 = d_2/(1 + d_2)$, in the spirit of an intra-class correlation, in order to cover all possible values of the treatment effects variance by a finite interval ($0 \le \rho_2 < 1$). Each value of the variance ratio $d_1$ of the intercepts is represented by one solid line. What can be seen from the picture is that for fixed values of $d_1$ the optimal allocation rate is equal to $1/2$ for $d_2 = 0$ and increases with increasing values of the variance ratio $d_2$ of the treatment effects. The different lines associated with the different values of $d_1$ appear in descending order, which means that the optimal allocation rates decrease when the variance ratio of the intercepts gets larger.

The next figure (Figure 2) shows the behavior of the optimal allocation rate $w^*$ in dependence of the variance ratio $d_1$ of the intercepts for several fixed values of the variance ratio $d_2$ of the treatment effects, where again the variance ratio is rescaled ($\rho_1 = d_1/(1 + d_1)$). Also here it can be seen that the optimal allocation rate decreases with increasing values of $d_1$ and increases with increasing values of $d_2$.

Finally, Figures 3 and 4 present the efficiency of the equal allocation rate $w = 1/2$, which is optimal in the fixed effects model ($d_1 = d_2 = 0$). The efficiency for the A-criterion (A-efficiency) has been computed using the standard formula

$$\mathrm{eff}(w) = \frac{\Phi(w^*)}{\Phi(w)}, \qquad (14)$$

where $\Phi$ denotes the criterion (13) and $w^*$ the optimal allocation rate.

The efficiencies decrease with increasing values of the variance of the treatment effects and increase with increasing values of the variance of the intercepts.
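The efficiencies in Figures 3 and 4 can in principle be reproduced as follows. This sketch uses arbitrary parameter values and restates the criterion for self-containment; the efficiency of the balanced design is $\mathrm{eff}(1/2) = \Phi(w^*)/\Phi(1/2)$ as in (14):

```python
import numpy as np

def criterion(w, n, m, d1, d2):
    # trace of the MSE matrix of the predicted treatment effects (sigma^2 omitted)
    a = m + 1.0 / d1
    return ((n - 1) * a / (a * (w * m + 1.0 / d2) - (w * m) ** 2)
            + 1.0 / (m * w * (1.0 - w)))

def efficiency_balanced(n, m, d1, d2):
    # efficiency of the equal allocation w = 1/2 relative to the optimal rate
    w = np.linspace(0.001, 0.999, 9981)
    return criterion(w, n, m, d1, d2).min() / criterion(0.5, n, m, d1, d2)

eff = efficiency_balanced(n=10, m=20, d1=0.5, d2=0.5)
print(0.0 < eff <= 1.0)   # True by construction: the grid contains w = 1/2
```

In this parametrization the efficiency approaches 1 as $d_2 \to 0$ and decreases as the heterogeneity of the treatment effects grows, in line with the figures.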

### 3.2 Example (continued).

For illustrative purposes we consider numerical examples with two different numbers $n$ of centers and $m$ of individuals in each center. Figures 5 and 6 exhibit the behavior of the optimal allocation rate $w^*$ to the treatment group in dependence of the variance ratio $d_2$ for five fixed values of the variance ratio $d_1$ of the intercepts, represented by a solid line, a dashed line, a dotted line, a dashed-dotted line and a long-dashed line, respectively. For reasons of presentation we plot the optimal allocation rate against the rescaled variance ratio $\rho_2 = d_2/(1 + d_2)$, in the spirit of an intra-class correlation, in order to cover all possible values of the variance ratio by a finite interval ($0 \le \rho_2 < 1$). As can be observed in the graphics, the strength of the dependence on the variance ratio differs between the two configurations of $n$ and $m$.

Figures 7 and 8 present the efficiency of the equal allocation rate, which is optimal in the fixed effects model. The sensitivity of the efficiency with respect to the variance ratio likewise differs between the two configurations of $n$ and $m$.

## 4 Discussion

As illustrated in the examples, the larger the between-unit (between-trial) variability of the treatment effects, the more the optimal allocation rate deviates from equal allocation, especially if the variance of the units’ intercepts is small. An increasing heterogeneity in the treatment effects leads to a decreased precision of the design that is optimal for population parameters: a balanced design is far from optimal if the treatment effects vary strongly compared to the residual error, and more subjects should be recruited to the active (new) treatment in multi-center trials. Nevertheless, it appears reassuring to the clinical trial practitioner that the efficiency loss may be limited, as in the examples, resulting in a total sample size increase of about 10–20% in the considered scenarios if individual predictions are foreseen with a balanced allocation. Usually, the between-unit variability of treatment effects is considered to be rather small, indicating that equal allocation may suffice. However, using the results given in this paper, specific settings with different expectations can be assessed properly, in order to make optimal use of a limited number of patients or sample units for predicting the random effects of individual units.

## References

- [Fedorov and Jones(2005)] Fedorov, V. and Jones, B. (2005). The design of multicentre trials. Statistical Methods in Medical Research, 14, 205–248.
- [Fedorov and Leonov(2013)] Fedorov, V. and Leonov, S. (2013). Optimal Design for Nonlinear Response Models. CRC Press, Boca Raton.
- [Lemme et al.(2015)] Lemme, F., van Breukelen, G. J. P., and Berger, M. P. F. (2015). Efficient treatment allocation in two-way nested designs. Statistical Methods in Medical Research, 24, 494–512.
- [Prus and Schwabe(2016)] Prus, M. and Schwabe, R. (2016). Optimal designs for the prediction of individual parameters in hierarchical models. Journal of the Royal Statistical Society: Series B, 78, 175–191.
- [Schwabe and Schmelter(2008)] Schwabe, R. and Schmelter, T. (2008). On optimal designs in random. Tatra Mountains Mathematical Publications, 39, 145–153.
