Optimal designs for model averaging in non-nested models

04/02/2019
by   Kira Alhorn, et al.

In this paper we construct optimal designs for frequentist model averaging estimation. We derive the asymptotic distribution of the model averaging estimate with fixed weights in the case where the competing models are non-nested and none of these models is correctly specified. A Bayesian optimal design minimizes an expectation of the asymptotic mean squared error of the model averaging estimate calculated with respect to a suitable prior distribution. We demonstrate that Bayesian optimal designs can improve the accuracy of model averaging substantially. Moreover, the derived designs also improve the accuracy of estimation in a model selected by model selection and model averaging estimates with random weights.


1 Introduction

There exists an enormous amount of literature on selecting an adequate model from a set of candidate models for statistical analysis. Numerous model selection criteria have been developed for this purpose. These procedures are widely used in practice and have the advantage of delivering a single model from a class of competing models, which makes them very attractive for practitioners. As examples we mention Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and its extensions, Mallows' $C_p$, generalized cross-validation and the minimum description length (see the monographs of Burnham and Anderson (2002), Konishi and Kitagawa (2008) and Claeskens and Hjort (2008) for more details). Different criteria have different properties, such as consistency, efficiency and parsimony (used in the sense of Claeskens and Hjort (2008, Chapter 4)). Overall there seems to be no universally optimal model selection criterion, and different criteria might be preferable in different situations depending on the particular application.

On the other hand, there exists a well known post-selection problem in this approach because model selection introduces an additional variance that is often ignored in statistical inference after model selection (see Pötscher (1991) for one of the first contributions discussing this issue). This post-selection problem is inter alia attributable to the fact that estimates after model selection behave like mixtures of potential estimates. For example, ignoring the model selection step (and thus the additional variability) may lead to confidence intervals with coverage probability smaller than the nominal value; see, for example, Chapter 7 in Claeskens and Hjort (2008) for a mathematical treatment of this phenomenon.

An alternative to model selection is model averaging, where estimates of a target parameter are smoothed across several models, rather than restricting inference to a single selected model. This approach has been widely discussed in the Bayesian literature, where it is known as "Bayesian model averaging" (see the tutorial of Hoeting et al. (1999) among many others). For Bayesian model averaging prior probabilities have to be specified, which might not always be possible, and therefore Hjort and Claeskens (2003) also proposed a "frequentist model averaging" approach, where smoothing across several models is commonly based on information criteria. Kapetanios et al. (2008) demonstrated that the frequentist approach is a worthwhile alternative to Bayesian model averaging. Stock and Watson (2003) observed that averaging predictions usually performs better than forecasting in a single model. Hong and Preston (2012) substantiate these observations with theoretical findings for Bayesian model averaging if the competing models are "sufficiently close". Further results pointing in this direction can be found in Raftery and Zheng (2003), Schorning et al. (2016) and Buatois et al. (2018).

Independently of this discussion there exists a large amount of research on how to design experiments optimally under model uncertainty (see Box and Hill (1967); Stigler (1971); Atkinson and Fedorov (1975) for early contributions). This work is motivated by the fact that an optimal design can improve the efficiency of the statistical analysis substantially if the postulated model assumptions are correct, but may be inefficient if the model is misspecified. Many authors suggested choosing the design for model discrimination such that the power of a test between competing regression models is maximized (see Ucinski and Bogacka (2005); López-Fidalgo et al. (2007); Tommasi and López-Fidalgo (2010) or Dette et al. (2015) for some more recent references). Other authors proposed to minimize an average of optimality criteria from different models to obtain an efficient design for all models under consideration (see Dette (1990), Zen and Tsai (2002) and Tommasi (2009) among many others).

Although model selection and model averaging are commonly used tools for statistical inference under model uncertainty, most of the literature on designing experiments under model uncertainty does not address the specific aspects of these methods directly. Optimal designs are usually constructed to maximize the power of a test for discriminating between competing models or to minimize a functional of the asymptotic variance of estimates in the different models. To the best of our knowledge, Alhorn et al. (2019) is the first contribution that addresses the specific challenges of designing experiments for model selection or model averaging. These authors constructed optimal designs minimizing the asymptotic mean squared error of the model averaging estimate and showed that optimal designs can yield a substantial reduction of the mean squared error. Moreover, they also showed that these designs improve the performance of estimates in models chosen by model selection criteria. However, their theory relies heavily on the assumption of nested models embedded in a framework of local alternatives as developed by Hjort and Claeskens (2003).

The goal of the present contribution is the construction of optimal designs for model averaging in cases where the competing models are not nested (note that in this case local alternatives cannot be formulated). Moreover, in contrast to most of the literature, we also consider the situation where all competing models misspecify the truth underlying the data. In order to derive an optimality criterion which can be used for the determination of optimal designs in this context, we further develop the approach of Hjort and Claeskens (2003) and derive an asymptotic theory for model averaging estimates for classes of competing models which are non-nested. Optimal designs are then constructed by minimizing the asymptotic mean squared error of the model averaging estimate, and it is demonstrated that these designs yield substantially more precise model averaging estimates. Moreover, these designs also improve the performance of estimates after model selection. Our work also contributes to the discussion of the superiority of model averaging over model selection. Most of the results presented in the literature indicate that model averaging has some advantages over model selection in general. We demonstrate that conclusions of this type depend sensitively on the class of models under consideration. In particular, we observe some advantages of estimation after model selection if the competing models are of rather different shape. Nevertheless, the optimal designs developed in this paper improve both estimation methods, and the improvement can be substantial in many cases.

The remaining part of this paper is organized as follows. The pros and cons of model averaging and model selection are briefly discussed in Section 2, where we introduce the basic methodology and investigate the impact of the similarity of the candidate models on the performance of the different estimates. In Section 3 we develop asymptotic theory for model averaging estimation in the case where the models are non-nested and all competing models might misspecify the underlying truth. Based on these results we derive a criterion for the determination of optimal designs. In Section 4 we study the performance of these designs by means of a simulation study. Finally, technical assumptions and proofs are given in Section 6.

2 Model averaging versus model selection

In this section we introduce the basic terminology and also illustrate in a regression framework that the superiority of model averaging over estimation in a model chosen by model selection depends sensitively on the class of competing models.

2.1 Basic terminology

We consider data obtained at $k$ different experimental conditions, say $x_1,\ldots,x_k$, chosen in a design space $\mathcal{X}$. At each experimental condition $x_i$ one observes $n_i$ responses, say $y_{i1},\ldots,y_{in_i}$, and the total sample size is $n=\sum_{i=1}^k n_i$. We also assume that the responses are realizations of random variables of the form

(2.1)  $Y_{ij} = \eta_\ell(x_i,\vartheta_\ell) + \varepsilon_{ij}, \qquad j=1,\ldots,n_i,\; i=1,\ldots,k,$

where the regression function $\eta_\ell$ is a differentiable function with respect to the parameter $\vartheta_\ell$ and the random errors $\varepsilon_{ij}$ are independent and normally distributed with mean $0$ and common variance $\sigma^2$. Furthermore, the index $\ell$ in $\eta_\ell$ corresponds to different models (with parameters $\vartheta_\ell$) and we assume that there are $r$ competing regression functions $\eta_1,\ldots,\eta_r$ under consideration.

Having $r$ different candidate models (differing by the regression functions $\eta_1,\ldots,\eta_r$), a classical approach for estimating a parameter of interest, say $\mu$, is to calculate an information criterion for each model under consideration and to estimate this parameter in the model optimizing this information criterion. For this purpose, we denote the density of the normal distribution corresponding to a regression model (2.1) by $f_\ell(y,x,\vartheta_\ell)$ with parameter $\vartheta_\ell$ and identify the different models by their densities (note that in the situation considered in this section these densities only differ in the mean). Using the observations $y_{11},\ldots,y_{kn_k}$ we calculate in each model the maximum likelihood estimate

(2.2)  $\hat\vartheta_\ell = \arg\max_{\vartheta_\ell} \mathcal{L}_\ell(\vartheta_\ell)$

of the parameter $\vartheta_\ell$, where

(2.3)  $\mathcal{L}_\ell(\vartheta_\ell) = \sum_{i=1}^{k}\sum_{j=1}^{n_i} \log f_\ell(y_{ij}, x_i, \vartheta_\ell)$

is the log-likelihood in candidate model $\ell$ ($\ell = 1,\ldots,r$). Note that we do not assume that the true data generating density is included in the set of candidate models. Each estimate $\hat\vartheta_\ell$ of the parameter $\vartheta_\ell$ yields an estimate

(2.4)  $\hat\mu_\ell = \mu_\ell(\hat\vartheta_\ell)$

for the quantity of interest, where $\mu_\ell(\vartheta_\ell)$ is the target parameter in model $\ell$.

For example, regression models of the type (2.1) are frequently used in dose finding studies (see MacDougall (2006) or Bretz et al. (2008)). In this case a typical target function of interest is the "quantile" defined by

(2.5)  $\mu = \mathrm{ED}_p = \min\Bigl\{ x \in \mathcal{X} \,:\, \eta(x,\vartheta) \ge \eta(x_{\min},\vartheta) + p \bigl(\max_{z \in \mathcal{X}} \eta(z,\vartheta) - \eta(x_{\min},\vartheta)\bigr) \Bigr\},$

where $x_{\min}$ denotes the smallest dose in the design space $\mathcal{X}$ and $p \in (0,1)$. The value defined in (2.5) is well known as the $\mathrm{ED}_p$, that is, the effective dose at which $100\,p$% of the maximum effect in the design space is achieved.
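To make the definition concrete, the quantity in (2.5) can be approximated numerically on a grid of doses. The following sketch assumes an Emax mean function and a dose range of [0, 150]; the function, dose range and parameter values are hypothetical placeholders and not the specifications of Table 1.

```python
import numpy as np

def emax(x, theta):
    """Emax mean function eta(x, theta) = theta0 + theta1 * x / (theta2 + x)."""
    t0, t1, t2 = theta
    return t0 + t1 * x / (t2 + x)

def ed(p, theta, mean=emax, dose_range=(0.0, 150.0), grid_size=5001):
    """Smallest dose at which a fraction p of the maximal effect over the
    design space is reached (grid approximation of the definition in (2.5))."""
    x = np.linspace(dose_range[0], dose_range[1], grid_size)
    eta = mean(x, theta)
    target = eta[0] + p * (eta.max() - eta[0])
    return x[np.argmax(eta >= target)]

# Example: ED_0.4 for an Emax model with hypothetical parameter values
print(ed(0.4, theta=(0.32, 0.75, 25.0)))
```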

We now briefly discuss the principle of model selection and model averaging to estimate the target parameter $\mu$. For model selection we choose the model from the candidate set which maximizes Akaike's information criterion (AIC)

(2.6)  $\mathrm{AIC}_\ell = 2\,\mathcal{L}_\ell(\hat\vartheta_\ell) - 2\,p_\ell,$

where $p_\ell$ is the number of parameters in model $\ell$ (see Claeskens and Hjort (2008), Chapter 2). The target parameter is finally estimated in the selected model. Obviously, other model selection schemes, such as the Bayesian or focussed information criterion, can be used here as well, but we restrict ourselves to the AIC for the sake of a transparent presentation.

Roughly speaking, model averaging is a weighted average of the individual estimates in the competing models. It might be viewed from a Bayesian (see for example Wassermann (2000)) or a frequentist point of view (see for example Claeskens and Hjort (2008)), resulting in different choices of model averaging weights. We will focus here on non-Bayesian methods. More explicitly, assigning nonnegative weights $w_1,\ldots,w_r$ to the candidate models with $\sum_{\ell=1}^r w_\ell = 1$, the model averaging estimate for $\mu$ is given by

(2.7)  $\hat\mu_{\mathrm{mav}} = \sum_{\ell=1}^{r} w_\ell\, \hat\mu_\ell .$

Frequently used weights are uniform weights $w_1 = \cdots = w_r = 1/r$ (see, for example, Stock and Watson (2004) or Kapetanios et al. (2008)). More elaborate model averaging weights can be chosen depending on the data. For example, Claeskens and Hjort (2008) define smooth AIC-weights as

(2.8)  $w_\ell^{\mathrm{AIC}} = \frac{\exp(\mathrm{AIC}_\ell/2)}{\sum_{s=1}^{r}\exp(\mathrm{AIC}_s/2)}, \qquad \ell = 1,\ldots,r.$

Alternative data dependent weights can be constructed using other information criteria or model selection criteria. There also exists a vast amount of literature on determining optimal data dependent weights such that the resulting mean squared error of the model averaging estimate is minimal (see Hjort and Claeskens (2003), Hansen (2007) or Liang et al. (2011) among many others). For the sake of brevity we concentrate on smooth AIC-weights here, but similar observations as presented in this paper can also be made for other data dependent weights.
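As an illustration of how the weights in (2.8) and the estimate in (2.7) are combined, the following sketch assumes that each candidate model has already been fitted by maximum likelihood; the maximised log-likelihoods, parameter counts and per-model estimates below are hypothetical numbers.

```python
import numpy as np

def smooth_aic_weights(logliks, n_params):
    """Smooth AIC weights w_l proportional to exp(AIC_l / 2),
    with AIC_l = 2 * loglik_l - 2 * p_l as in (2.6)."""
    aic = 2.0 * np.asarray(logliks) - 2.0 * np.asarray(n_params)
    a = aic - aic.max()                 # stabilise the exponentials
    w = np.exp(a / 2.0)
    return w / w.sum()

def model_average(estimates, weights):
    """Model averaging estimate (2.7): weighted mean of the per-model estimates."""
    return float(np.dot(weights, estimates))

# Hypothetical fitted values for three candidate models
logliks  = [-210.3, -208.9, -215.1]    # maximised log-likelihoods
n_params = [4, 4, 4]                   # parameters per model
ed_hats  = [52.1, 48.7, 63.4]          # per-model estimates of the target

w_unif = np.full(3, 1 / 3)
w_saic = smooth_aic_weights(logliks, n_params)
print(model_average(ed_hats, w_unif), model_average(ed_hats, w_saic))
```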

2.2 The class of competing models matters

In this section we illustrate the influence of the candidate set on the properties of model averaging estimation and estimation after model selection by means of a brief simulation study. For this purpose we consider four regression models of the form (2.1), which are commonly used in dose-response modeling and specified in Table 1 with corresponding parameters.

Model Mean function Parameter specifications
Log-Linear ()
Emax ()
Exponential ()
Quadratic ()
Table 1: Models and parameters used for the simulation study.

Here we adapt the setting of Pinheiro et al. (2006) who model the dose-response relationship of an anti-anxiety drug, where the dose of the drug varies in a given dose range. In particular, we have different dose levels and the patients are allocated to the dose levels almost equally; the total sample sizes under consideration are given in the tables below. We consider the problem of estimating the ED defined in (2.5).

To investigate the particular differences between both estimation methods we choose two different sets of competing models from Table 1. The first set

(2.9)

contains the log-linear, the Emax and the quadratic model, while the second set

(2.10)

contains the log-linear, the Emax and the exponential model. The first set serves as a prototype set of "similar" models, while the second set contains models of more "different" shape. This is illustrated in Figure 1. In the left panel we show the quadratic model (for the parameters specified in Table 1) and the best approximations of this function by a log-linear and an Emax model with respect to the Kullback-Leibler divergence

(2.11)

In this case all models have a very similar shape and we obtain similar values of the ED for the log-linear, the Emax and the quadratic model. Similarly, the right panel shows the exponential model (solid line) and its corresponding best approximations by the log-linear and the Emax model. Here we observe larger differences between the models in the candidate set, which is also reflected in the values of the ED obtained for the three models.

Figure 1: Left panel: quadratic model (solid line) and its best approximations by the log-linear (dashed line) and the Emax model (dotted line) with respect to the Kullback-Leibler divergence (2.11). Right panel: exponential model (solid line) and its best approximations by the log-linear (dashed line) and the Emax model (dotted line).
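The best approximating parameters used in Figure 1 can be computed numerically. For homoscedastic normal regression models the Kullback-Leibler divergence between two candidate densities reduces, up to constants, to a weighted squared distance of the mean functions over the design, which the following sketch minimises; the mean functions, design points and parameter values are simplified placeholders (in particular, the offset of the log-linear model is fixed at 1 here).

```python
import numpy as np
from scipy.optimize import minimize

def quadratic(x, b):
    """Mean of the data generating model (left panel); hypothetical parameters."""
    return b[0] + b[1] * x + b[2] * x ** 2

def loglinear(x, t):
    """Candidate mean theta0 + theta1 * log(x + 1), offset fixed for simplicity."""
    return t[0] + t[1] * np.log(x + 1.0)

# design points and weights over which the approximation is computed
x = np.linspace(0.0, 150.0, 7)
w = np.full(x.size, 1.0 / x.size)

def kl_proxy(t, b):
    """Weighted squared distance of the mean functions, standing in for the
    Kullback-Leibler divergence (2.11) between homoscedastic normal models."""
    return np.sum(w * (quadratic(x, b) - loglinear(x, t)) ** 2)

b_true = (0.2, 0.01, -4.0e-5)          # hypothetical quadratic parameters
res = minimize(kl_proxy, x0=(0.1, 0.1), args=(b_true,), method="Nelder-Mead")
print(res.x)                           # best approximating log-linear parameters
```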

All results presented in this paper are based on simulation runs generating in each run observations of the form

(2.12)  $y_{ij} = \eta(x_i,\vartheta) + \varepsilon_{ij}, \qquad j = 1,\ldots,n_i,\; i = 1,\ldots,k,$

where the errors $\varepsilon_{ij}$ are independent centred normally distributed random variables with a common variance and $\eta$ is one of the models in Table 1 (with the parameters specified there). The target parameter is estimated by model averaging with uniform weights, by model averaging with the smooth AIC-weights in (2.8), and by estimation after model selection with the AIC criterion.
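The following Monte Carlo sketch illustrates how such simulated mean squared errors can be obtained. It is deliberately simplified to a single candidate model; in the actual study every candidate model is fitted in each run and the averaged estimates (2.7) with the smooth AIC-weights (2.8) are formed from the fits. Dose levels, sample size, error standard deviation and parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def emax(x, e0, emx, ed50):
    return e0 + emx * x / (ed50 + x)

def ed(p, theta, grid=np.linspace(0.0, 150.0, 3001)):
    """Grid approximation of the ED_p as in (2.5)."""
    eta = emax(grid, *theta)
    return grid[np.argmax(eta >= eta[0] + p * (eta.max() - eta[0]))]

doses  = np.array([0.0, 10.0, 25.0, 50.0, 100.0, 150.0])  # hypothetical design
n_per  = 25                                                # observations per dose
theta0 = (0.32, 0.75, 25.0)                                # hypothetical "true" parameters
target = ed(0.4, theta0)

sq_err = []
for _ in range(200):                                       # small number of runs
    x = np.repeat(doses, n_per)
    y = emax(x, *theta0) + rng.normal(0.0, 0.3, x.size)    # data as in (2.12)
    theta_hat, _ = curve_fit(emax, x, y, p0=theta0, maxfev=10000)
    sq_err.append((ed(0.4, theta_hat) - target) ** 2)

print(np.mean(sq_err))                                     # simulated mean squared error
```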

In Tables 2 and 3 we show the simulated mean squared errors of the model averaging estimates with uniform weights (left column), with smooth AIC-weights (2.8) (middle column) and of the estimate after model selection (right column). Here, different rows correspond to different models and sample sizes. The numbers printed in bold face indicate the estimation method with the smallest mean squared error.

2.2.1 Models of similar shape

model sample size uniform weights smooth AIC-weights model selection
437.045 498.323 758.978
223.291 218.99 285.062
111.973 82.713 78.371
286.638 329.904 515.32
189.785 203.796 251.836
62.792 64.854 66.54
276.037 361.101 669.873
190.662 244.558 391.443
92.653 109.852 139.859
1503.903 1372.31 1381.033
1109.622 856.484 729.912
864.163 398.144 255.604
Table 2: Simulated mean squared error of different estimates of the ED. The set of candidate models is the set (2.9). Left column: model averaging with uniform weights; middle column: model averaging with smooth AIC-weights; right column: estimation after model selection.

We will first discuss the results for the set of similar models in (2.9) (see Table 2). If the data generating model is an element of the set of candidate models, model averaging with uniform weights performs very well. Model averaging with smooth AIC-weights yields a somewhat larger mean squared error (except for two cases, where it performs better than model averaging with uniform weights). On the other hand, the mean squared error of estimation after model selection is substantially larger than that of model averaging if the sample size is small. This is a consequence of the additional variability associated with data-dependent weights: the variance of the estimate after model selection exceeds the variances of both model averaging estimates, whereas for the squared bias the order is exactly the opposite, although the differences in the bias are not so large. This means that the bias can be reduced by using random weights, because these put more weight on the "correct" model. As a consequence, compared to model averaging with uniform weights the performance of model averaging with smooth AIC-weights and of the estimate after model selection improves with increasing sample size. Nevertheless, if the "true" model is an element of the candidate set and the functions in this set have a similar shape, model averaging performs better than estimation after model selection. In particular, model averaging with (fixed) uniform weights yields very reasonable results. These observations coincide with the findings of Schorning et al. (2016) and Buatois et al. (2018), who compared model averaging and model selection in the context of dose finding studies (see also Chen et al. (2018) for similar results for the AIC in the context of ordered probit and nested logit models).

The situation changes if none of the candidate models from the set is the "true" model. This is illustrated in the lower part of Table 2, where we show results for the case that the exponential model is used for generating the data. We observe that model averaging with uniform weights is outperformed by model averaging with smooth AIC-weights. Moreover, the estimate after model selection is even better if the sample size increases. These observations can be explained by the different shapes of the regression functions, as illustrated in Figure 1. By a suitable choice of parameters the quadratic model can adapt to the shape of the exponential model, whereas the log-linear and the Emax model still have rather different forms (see the right panel of Figure 1 for the best approximations that are possible using the log-linear and the Emax model). Thus, incorporating these models in a model averaging estimate yields a large bias that can be reduced substantially by data dependent weights or by model selection: the squared bias of the model averaging estimate with uniform weights is considerably larger than the squared bias of the model averaging estimate with smooth AIC-weights and of the estimate after model selection.

2.2.2 Models of more different shape

estimation method
model sample size uniform weights smooth AIC-weights model selection
834.295 553.427 776.311
712.404 340.254 353.707
524.518 48.587 38.591
640.706 505.054 669.285
517.963 267.967 286.272
394.536 65.805 53.424
1076.154 1141.476 1427.441
871.362 766.140 802.763
802.196 480.641 399.839
288.091 486.501 852.377
208.628 298.315 419.651
162.689 138.331 142.673
Table 3: Simulated mean squared error of different estimates of the ED. The set of candidate models is the set (2.10). Left column: model averaging with uniform weights; middle column: model averaging with smooth AIC-weights; right column: estimation after model selection.

We will now consider the candidate set in (2.10), which serves as an example of more different models and includes the log-linear, the Emax and the exponential model. The simulated mean squared errors of the three estimates of the ED are given in Table 3. The upper part of the table corresponds to cases where data is generated from a model in the candidate set used for model selection and averaging. In contrast to Section 2.2.1 we observe only one scenario where model averaging with uniform weights gives the smallest mean squared error (but in this case model averaging with smooth AIC-weights yields very similar results). If the sample size increases, model averaging with smooth AIC-weights and estimation after model selection yield a substantially smaller mean squared error. An explanation of this observation is that for a candidate set containing models with rather different shapes, model averaging with uniform weights produces a large bias. On the other hand, model averaging with smooth AIC-weights and estimation after model selection adapt to the data and put more weight on the "true" model, in particular if the sample size is large. As estimation after model selection has a larger variance but a smaller bias, and the variance decreases with increasing sample size, the bias dominates the mean squared error for large sample sizes, and thus estimation in the model selected by the AIC is more efficient for large sample sizes.

Finally, if the data is generated according to the quadratic model, model averaging with uniform weights has the smallest mean squared error for the two smaller sample sizes. In this case estimation in the model selected by the AIC performs much worse (due to its large variance). However, the differences become smaller with increasing sample size. In particular, for the largest sample size model averaging with smooth AIC-weights and estimation after model selection show a substantially better performance than model averaging with uniform weights.

The numerical study in Sections 2.2.1 and 2.2.2 can be summarized as follows. The results reported in the literature have to be partially put into perspective. The superiority of model averaging with uniform weights can only be observed for classes of "similar" competing models and a not too large signal to noise ratio. On the other hand, if the models in the candidate set are of rather different structure, model averaging with data dependent weights (such as smooth AIC-weights) or estimation after model selection may show a better performance. For these reasons we will investigate optimal and efficient designs for all three estimation methods in the following sections. We will demonstrate that a careful design of experiments can improve the accuracy of these estimates substantially.

3 Asymptotic properties and optimal design

In this section we will derive the asymptotic properties of model averaging estimates with fixed weights in the case where the competing models are not nested. The results can be used for (at least) two purposes. On the one hand, they provide some understanding of the empirical findings in Section 2, where we observed that for increasing sample size the mean squared error of model averaging estimates is dominated by the bias. On the other hand, we will use these results to develop an asymptotic representation of the mean squared error of the model averaging estimate, which can be used in the construction of optimal designs.

3.1 Model averaging for non-nested models

Hjort and Claeskens (2003)

provide an asymptotic distribution of frequentist model averaging estimates making use of local alternatives which require the true data generating process to lie inside a wide parametric model. All candidate models are sub-models of this wide model and the deviations in the parameters are restricted to be of order

. Using this assumption results in convenient approximations for the mean squared error as variance and bias are both of order . However, in the discussion of this paper Raftery and Zheng (2003) pose the question if the framework of local alternatives is realistic. More importantly, frequentist model averaging is also often used for non-nested models (see for example Verrier et al. (2014)). In this section we will develop asymptotic theory for model averaging estimation in non-nested models. In particular, we do not assume that the “true” model is among the candidate models used in the model averaging estimate.

As we will apply our results for the construction of efficient designs for model averaging estimation, we use the common notation of this field. To be precise, let $Y$ denote a response variable and let $x$ denote a vector of explanatory variables defined on a given compact design space $\mathcal{X}$. Suppose that $Y$ has a density $g(\cdot, x)$ with respect to a dominating measure. For estimating a quantity of interest, say $\mu$, from the distribution of $Y$ we use $r$ different parametric candidate models with densities

(3.1)  $f_\ell(y, x, \vartheta_\ell), \qquad \ell = 1,\ldots,r,$

where $\vartheta_\ell$ denotes the parameter in the $\ell$th model, which varies in a compact parameter space, say $\Theta_\ell$. Note that in general we do not assume that the density $g$ is contained in the set of candidate models in (3.1), and that the regression model (2.1) investigated in Section 2 is a special case of this general notation.

We assume that $k$ different experimental conditions, say $x_1,\ldots,x_k$, can be chosen in the design space $\mathcal{X}$ and that at each experimental condition $x_i$ one can observe $n_i$ responses, say $y_{i1},\ldots,y_{in_i}$ (thus the total sample size is $n=\sum_{i=1}^k n_i$), which are realizations of independent identically distributed random variables with density $g(\cdot, x_i)$. For example, if $g$ coincides with one of the candidate densities, say $f_\ell$, then the density of the random variables at $x_i$ is given by $f_\ell(\cdot, x_i, \vartheta_\ell)$ ($i=1,\ldots,k$). To measure efficiency and to compare different experimental designs we will use asymptotic arguments and consider the case $n_i/n \to w_i > 0$ for $n \to \infty$. As common in optimal design theory we collect this information in the form

(3.2)  $\xi = \{\, x_1, \ldots, x_k ;\; w_1, \ldots, w_k \,\},$

which is called an approximate design in the following discussion (see, for example, Kiefer (1974)). For an approximate design of the form (3.2) and total sample size $n$ a rounding procedure is applied to obtain integers $n_i$ taken at each $x_i$ ($i=1,\ldots,k$) from the not necessarily integer valued quantities $n w_i$ (see, for example, Pukelsheim (2006), Chapter 12).
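The rounding step can be implemented in different ways; the sketch below uses a simple largest-remainder rule, whereas Pukelsheim (2006, Chapter 12) describes the efficient design apportionment usually employed in practice.

```python
import numpy as np

def round_design(weights, n):
    """Apportion an approximate design to n observations: start from
    floor(n * w_i) and hand out the remaining observations to the largest
    fractional parts (a simple rounding heuristic)."""
    w = np.asarray(weights, dtype=float)
    raw = n * w
    n_i = np.floor(raw).astype(int)
    # distribute the remaining observations by largest remainder
    for idx in np.argsort(raw - n_i)[::-1][: n - n_i.sum()]:
        n_i[idx] += 1
    return n_i

# hypothetical design weights and total sample size
print(round_design([0.32, 0.18, 0.27, 0.23], n=150))
```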

The asymptotic properties of the maximum likelihood estimate $\hat\vartheta_\ell$ (calculated under the assumption that $f_\ell$ is the correct density) are derived under certain regularity assumptions (see Assumptions (A1)-(A6) in Section 6). In particular, we assume that the functions $\vartheta_\ell \mapsto f_\ell(y,x,\vartheta_\ell)$ are twice continuously differentiable and that several expectations of derivatives of the log-densities exist. For a given approximate design $\xi$ and a candidate density $f_\ell$ we denote by

(3.3)  $\mathrm{KL}(g, f_\ell, \vartheta_\ell, \xi) = \sum_{i=1}^{k} w_i \int \log\frac{g(y, x_i)}{f_\ell(y, x_i, \vartheta_\ell)}\, g(y, x_i)\, dy$

the Kullback-Leibler divergence between the models $g$ and $f_\ell$ and assume that the minimizer

(3.4)  $\vartheta_\ell^* = \arg\min_{\vartheta_\ell \in \Theta_\ell} \mathrm{KL}(g, f_\ell, \vartheta_\ell, \xi)$

is unique for each $\ell = 1,\ldots,r$. For notational simplicity we will omit the dependence of the minimizer on the density $g$ whenever it is clear from the context and denote the minimizer by $\vartheta_\ell^*$. We also assume that the matrices

(3.5)  $A_\ell(\xi) = \sum_{i=1}^{k} w_i\, \mathbb{E}\Bigl[\tfrac{\partial^2}{\partial\vartheta_\ell\,\partial\vartheta_\ell^{\top}} \log f_\ell(Y, x_i, \vartheta_\ell^*)\Bigr],$

(3.6)  $B_\ell(\xi) = \sum_{i=1}^{k} w_i\, \mathbb{E}\Bigl[\tfrac{\partial}{\partial\vartheta_\ell} \log f_\ell(Y, x_i, \vartheta_\ell^*)\, \bigl(\tfrac{\partial}{\partial\vartheta_\ell} \log f_\ell(Y, x_i, \vartheta_\ell^*)\bigr)^{\top}\Bigr]$

exist, where the expectations are taken with respect to the true distribution of the response.
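These two matrices can be estimated by their empirical counterparts, which leads to the familiar misspecification-robust sandwich covariance used below; a minimal sketch, assuming per-observation score vectors and Hessian contributions of the log-density, evaluated at the fitted parameter, are available.

```python
import numpy as np

def sandwich_cov(scores, hessians):
    """Robust covariance A^{-1} B A^{-1} in the sense of White (1982), estimated
    from per-observation score vectors and Hessian contributions of the
    log-density at the fitted (best approximating) parameter."""
    scores = np.asarray(scores)            # shape (n, p)
    A = np.mean(hessians, axis=0)          # estimate of the expected Hessian
    B = scores.T @ scores / len(scores)    # estimate of E[score score^T]
    A_inv = np.linalg.inv(A)
    return A_inv @ B @ A_inv

# toy check: for a correctly specified model B = -A and the sandwich collapses
# to the usual inverse Fisher information (here approximately the identity)
rng = np.random.default_rng(0)
s = rng.normal(size=(500, 2))
h = np.repeat(-np.eye(2)[None, :, :], 500, axis=0)
print(sandwich_cov(s, h))
```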

Under standard assumptions White (1982) shows the existence of a measurable maximum likelihood estimate $\hat\vartheta_\ell$ for all candidate models, which is strongly consistent for the (unique) minimizer $\vartheta_\ell^*$ in (3.4). Moreover, the estimate is also asymptotically normally distributed, that is,

(3.7)  $\sqrt{n}\,(\hat\vartheta_\ell - \vartheta_\ell^*) \xrightarrow{\;d\;} \mathcal{N}\bigl(0,\; A_\ell^{-1}(\xi)\, B_\ell(\xi)\, A_\ell^{-1}(\xi)\bigr),$

where we assume the existence of the inverse matrices, $\xrightarrow{d}$ denotes convergence in distribution and we use the notations

(3.8)

($\ell = 1,\ldots,r$). The following result gives the asymptotic distribution of model averaging estimates of the form (2.7).

Theorem 3.1.

If Assumptions (A1) - (A7) in Section 6.1 are satisfied, then the model averaging estimate (2.7) satisfies

(3.9)

where the asymptotic variance is given by

(3.10)

Theorem 3.1 shows that the model averaging estimate is biased for the true target parameter $\mu$ unless the weighted average $\sum_{\ell=1}^{r} w_\ell\, \mu_\ell(\vartheta_\ell^*)$ of the limiting values coincides with $\mu$. Hence we aim to minimize the asymptotic mean squared error of the model averaging estimate. Note that the bias does not depend on the sample size, while the variance is of order $1/n$.
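The resulting bias-variance trade-off can be made explicit in a small numerical sketch: the squared bias of the weighted limit targets stays constant while the variance contribution vanishes with growing sample size. All numbers below are hypothetical, and the variance term merely stands in for the expression in (3.10).

```python
import numpy as np

def asymptotic_mse(weights, mu_limits, mu_true, avar, n):
    """Squared bias of the weighted limit targets mu_l(theta_l^*) plus the
    asymptotic variance of the averaged estimate divided by n."""
    bias = np.dot(weights, mu_limits) - mu_true
    return bias ** 2 + avar / n

# hypothetical numbers: per-model limits of the ED estimates, true ED,
# asymptotic variance of the averaged estimate, and two sample sizes
for n in (50, 250):
    print(n, asymptotic_mse([1/3, 1/3, 1/3], [52.0, 49.5, 60.0], 51.0, avar=2400.0, n=n))
```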

3.2 Optimal designs for model averaging of non-nested models

Alhorn et al. (2019) determined optimal designs for model averaging minimizing the asymptotic mean squared error of the estimate calculated in a class of nested models under local alternatives and demonstrated that optimal designs lead to substantially more precise model averaging estimates than commonly used designs in dose finding studies. With the results of Section 3.1 we can develop a more general concept of designing experiments for model averaging estimation, which is applicable to non-nested models and to situations where the "true" model is not contained in the set of candidate models used for model averaging.

To be precise, we consider the criterion

(3.11)

where $\mu$ is the target parameter in the "true" model with density $g$, and the asymptotic variance and the best approximating parameters are defined in (3.10) and (3.4), respectively. Note that this criterion depends on the "true" distribution $g$ via the target $\mu$ and the best approximating parameters $\vartheta_\ell^*$.

In order to estimate the target parameter via a model averaging estimate of the form (2.7) as precisely as possible, a "good" design should yield small values of the criterion function. Therefore, for a given finite set of candidate models and given weights, a design is called locally optimal design for model averaging estimation of the parameter $\mu$ if it minimizes the function in (3.11) in the class of all approximate designs on $\mathcal{X}$. Here the term "locally" refers to the seminal paper of Chernoff (1953) on optimal designs for nonlinear regression models, because the optimality criterion still depends on the unknown density $g$.

A general approach to address this uncertainty problem is a Bayesian approach based on a class of models for the density $g$. To be precise, let $\mathcal{G}$ denote a finite set of potential densities and let $\pi$ denote a probability distribution on $\mathcal{G}$; then we call a design Bayesian optimal design for model averaging estimation of the parameter $\mu$ if it minimizes the function

(3.12)

In general, the set $\mathcal{G}$ can be constructed independently of the set of candidate models. However, if there is not much prior information available, one can construct a class of potential models from the candidate set as follows. We denote the candidate set of models in (3.1) by $\{f_1,\ldots,f_r\}$. Each of these models depends on an unknown parameter $\vartheta_\ell$ and we denote by $\mathcal{T}_\ell$ a set of possible parameter values for the model $f_\ell$. Now let a prior distribution on the set of candidate models be given and, for each $\ell$, a prior distribution on the set $\mathcal{T}_\ell$. Finally, we define $\mathcal{G}$ as the set of all densities $f_\ell(\cdot,\vartheta_\ell)$ with $\vartheta_\ell \in \mathcal{T}_\ell$ ($\ell = 1,\ldots,r$) and a prior

(3.13)

then the criterion (3.12) can be rewritten as

(3.14)

In the finite sample study of the following section the candidate set and the sets $\mathcal{T}_\ell$ (for any $\ell$) are finite, which results in a finite set $\mathcal{G}$.
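The construction of the finite prior set and the evaluation of the criterion (3.14) can be sketched as follows; the parameter grids and the local criterion used here are hypothetical placeholders, the latter standing in for the asymptotic mean squared error of Theorem 3.1.

```python
# Hypothetical parameter grids for each candidate model (three values per varied
# parameter, as described around (4.1)); the numbers are placeholders, not the
# specifications of Table 1.
param_sets = {
    "loglinear":   [(0.1, 0.5, 1.0), (0.1, 0.7, 1.0), (0.1, 0.9, 1.0)],
    "emax":        [(0.3, 0.6, 15.0), (0.3, 0.7, 25.0), (0.3, 0.8, 40.0)],
    "exponential": [(0.1, 0.2, 60.0), (0.1, 0.3, 80.0), (0.1, 0.4, 100.0)],
}

# Prior of the form (3.13): uniform over the models, uniform over each parameter grid.
prior = {(model, theta): (1 / len(param_sets)) * (1 / len(thetas))
         for model, thetas in param_sets.items() for theta in thetas}

def bayesian_criterion(design, local_criterion):
    """Criterion (3.14): prior-weighted average of the local criterion (3.11)
    over the finite set of potential true models."""
    return sum(prob * local_criterion(design, model, theta)
               for (model, theta), prob in prior.items())

# placeholder local criterion, standing in for the asymptotic mean squared error
toy_local = lambda design, model, theta: theta[-1]
print(bayesian_criterion(None, toy_local))
```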

We conclude by noting that the optimality criteria proposed in this section have been derived for model averaging estimates with fixed weights. The asymptotic theory presented here cannot easily be adapted to estimates using data-dependent (random) weights (as considered in Section 2), because it is difficult to obtain an explicit expression for the asymptotic distribution, which is not normal in general. Nevertheless, we will demonstrate in the following section that designs minimizing the mean squared error of model averaging estimates with fixed weights also yield a substantial improvement for model averaging estimation with smooth AIC-weights and for estimation after model selection.

4 Bayesian optimal designs for model averaging

We will demonstrate by means of a simulation study that the performance of all considered estimates can be improved substantially by the choice of an appropriate design. For this purpose we consider the same situation as in Section 2, that is, regression models of the form (2.1) with centred normally distributed errors. We also consider the two different candidate sets defined in (2.9) (log-linear, Emax and quadratic model) and (2.10) (log-linear, Emax and exponential model), respectively.

Using the criterion introduced in Section 3 we now determine a Bayesian optimal design for model averaging estimation of the ED with uniform weights. We require a prior distribution for the unknown density $g$, and we use a distribution of the form (3.13) for this purpose. To be precise, the potential densities are densities of normal distributions with means given by the regression functions in Table 1 and a common variance. As the criterion (3.14) does not depend on the intercepts, these are not varied and are taken from Table 1. For each of the other parameters we use three different values: the value specified in Table 1 and a larger and a smaller value of this parameter.

(4.1)

4.1 Models of similar shape

We will first consider the candidate set consisting of the log-linear, the Emax and the quadratic model. For the definition of the prior distribution (3.13) in the criterion (3.14) we consider a uniform distribution on the set of candidate models and a uniform prior on each of the parameter sets in (4.1). The Bayesian optimal design for model averaging estimation of the ED minimizing the criterion (3.14) has been calculated numerically using the COBYLA algorithm (see Powell (1994)) and is given by

(4.2)
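For illustration, the sketch below shows how a design (support points and weights) can be parameterized and handed to the COBYLA implementation in SciPy. The objective used here is a simple single-model D-optimality criterion serving only as a stand-in; in our setting the Bayesian criterion (3.14) would be minimised instead, and all numerical values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def emax_grad(x, theta):
    """Gradient of the Emax mean e0 + emax*x/(ed50+x) w.r.t. (e0, emax, ed50)."""
    e0, emx, ed50 = theta
    return np.stack([np.ones_like(x), x / (ed50 + x), -emx * x / (ed50 + x) ** 2], axis=-1)

def neg_log_det_info(z, theta, dose_max=150.0):
    """Toy design criterion (D-optimality for a single model); the Bayesian
    model averaging criterion (3.14) would replace this objective."""
    k = len(z) // 2
    x = np.clip(z[:k], 0.0, dose_max)             # support points
    w = np.abs(z[k:]); w = w / w.sum()            # weights, normalised
    g = emax_grad(x, theta)
    M = (w[:, None, None] * g[:, :, None] * g[:, None, :]).sum(axis=0)
    _, logdet = np.linalg.slogdet(M + 1e-10 * np.eye(3))
    return -logdet

theta0 = (0.32, 0.75, 25.0)                       # hypothetical local parameter guess
z0 = np.concatenate([np.linspace(1.0, 149.0, 4), np.full(4, 0.25)])
res = minimize(neg_log_det_info, z0, args=(theta0,), method="COBYLA",
               options={"maxiter": 2000, "rhobeg": 10.0})
k = len(res.x) // 2
print(np.clip(res.x[:k], 0, 150), np.abs(res.x[k:]) / np.abs(res.x[k:]).sum())
```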

We will compare this design with the design

(4.3)

proposed in Pinheiro et al. (2006) for a similar setting (this design has also been used in Section 2) and with the locally optimal design for estimation of the ED in the log-linear model with the parameters specified in Table 1, given by

(4.4)

(see [22]). Results for the locally optimal designs for the estimation of the ED in the Emax and exponential model are similar and omitted for the sake of brevity. We use the same setup as in Section 2 and, for the sake of brevity, only report results for one sample size; other results are available from the authors.

model design uniform weights smooth AIC-weights model selection
(4.2) 164.338 161.239 182.069
(4.3) 223.291 218.99 285.062
(4.4) 185.251 184.77 340.698
(4.2) 122.665 135.969 168.577
(4.3) 189.785 203.796 251.836
(4.4) 501.814 501.394 1162.654
(4.2) 174.535 212.746 331.552
(4.3) 190.662 244.558 391.443
(4.4) 404.716 427.548 1396.051
(4.2) 1073.859 804.385 630.458
(4.3) 1109.622 856.484 729.912
(4.4) 3184.11 3413.566 4102.964
Table 4: Simulated mean squared errors of different estimates of the ED for different experimental designs. The set of candidate models is the set (2.9). Left column: model averaging estimate with uniform weights; middle column: model averaging estimate with smooth AIC-weights; right column: estimate after model selection.

The corresponding results are given in Table 4, where we use the models and from Table 1 to generate the data (note that the model is not in the candidate set used for model averaging and model selection). The different columns represent the different estimation methods (left column: model averaging with uniform weights; middle column: smooth AIC-weights, right column: model selection). The numbers printed in boldface indicate the minimal mean squared error for each estimation method obtained from the different experimental designs. First, we consider the situation, where the data generating model is contained in the set of candidate models corresponding to the upper part of the table. We observe that in this case model averaging yields better results than estimation after model selection and this superiority is independent of the design under consideration. Compared to the designs and the Bayesian optimal design for model averaging with uniform weights improves the efficiency of all estimation techniques. For example, when data is generated using the log-linear model the mean squared error of the model averaging estimate with uniform weights is reduced by and , when the optimal design is used instead of the designs or , respectively. This improvement is remarkable as the design is locally optimal for estimating the ED in the model and data is generated from this model. In other cases the improvement is even more visible. For example, if data is generated by the model the improvement in model averaging estimation with uniform weights is and compared to the designs and . Moreover, although the designs are constructed for model averaging with uniform weights they also yield substantially more accurate model averaging estimates with smooth AIC-weights and a more precise estimate after model selection. For example, if the data is generated from model the mean squared error is reduced by and by for estimation with smooth AIC-weights and by and for estimation after model selection, respectively. Similar results can be observed for the models and .

Next, we consider the case where the data is generated from the exponential model , which is not contained in the candidate set . The efficiency of all three estimates improves substantially by the use of the Bayesian optimal design . Interestingly, the improvement is less pronounced for model averaging with uniform weights ( and compared to the designs and , respectively) than for smooth AIC-weights ( and ) and estimation after model selection ( and ).

Summarizing, our numerical results show that the Bayesian optimal design for model averaging estimation of the ED yields a substantial improvement of the mean squared error of the model averaging estimate with uniform weights, of the estimate with smooth AIC-weights and of the estimate after model selection, for all four models under consideration.

4.2 Models of different shape

We will now consider the second candidate set consisting of the log-linear, the Emax and the exponential model. For the definition of the prior distribution (3.13) in the criterion (3.14) we use a uniform distribution on the set of candidate models and a uniform prior on each of the parameter sets in (4.1). For this choice the Bayesian optimal design for model averaging estimation of the ED is given by

(4.5)

and has (in comparison to the design (4.2)) five instead of four support points.

estimation method
model design uniform weights smooth AIC-weights model selection
(4.5) 724.682 349.615 344.991
(4.3) 712.404 340.254 353.707
(4.4) 798.473 444.457 459.335
(4.5) 510.354 239.594 206.281
(4.3) 517.963 267.967 286.272
(4.4) 1155.793 1025.269 1835.355
(4.5) 834.284 665.07 646.381
(4.3) 871.362 766.140 802.763
(4.4) 1526.230 1842.633 2721.415
(4.5) 148.559 307.103 388.958
(4.3) 208.628 298.315 419.651
(4.4) 522.652 610.198 1907.066
Table 5: Simulated mean squared errors of different estimates of the ED for different experimental designs. The set of candidate models is the set (2.10). Left column: model averaging estimate with uniform weights; middle column: model averaging estimate with smooth AIC-weights; right column: estimate after model selection.

The simulated mean squared errors of the three estimates under the different designs are given in Table 5. We observe again that, compared to the designs (4.3) and (4.4), the Bayesian optimal design improves most estimation techniques substantially. However, if model averaging with uniform weights is used and data is generated by the first model in Table 5, the mean squared error of the model averaging estimate from the optimal design is slightly larger than the mean squared error obtained by the design (4.3). For model averaging with smooth AIC-weights this difference is also small. Overall, the reported results demonstrate a substantial improvement in efficiency by the use of the Bayesian optimal design independently of the estimation method. If the Bayesian optimal design is used, estimation after model selection yields the smallest mean squared error if the data is generated from a model of the candidate set. On the other hand, if data is generated from the quadratic model, model averaging with equal weights shows the best performance.

Summarizing, our numerical results show that compared to the designs and the design reduces the mean squared error of model averaging estimates with uniform weights up to . Furthermore, for smooth AIC-weights and estimation after model selection the reduction can be even larger and is up to and , respectively. These improvements hold also for the quadratic model , which is not contained in the candidate set used in the definition of the optimality criterion.

5 Conclusions

In this paper we derived the asymptotic distribution of frequentist model averaging estimates with fixed weights for a class of not necessarily nested candidate models. In particular, we do not assume that this class contains the "true" model. We use these results to determine Bayesian optimal designs for model averaging, which improve the estimation accuracy substantially. Although these designs are constructed for model averaging with fixed weights, they also yield a substantial improvement of accuracy for model averaging with data dependent weights and for estimation after model selection.

We also demonstrate that the superiority of model averaging over estimation after model selection depends sensitively on the class of competing models used in the model averaging procedure. If the competing models are similar (which means that a given model from the class can be well approximated by all other models), then model averaging should be preferred. Otherwise, we observe advantages for estimation after model selection, in particular, if the signal to noise ratio is small.

Although the new designs show a very good performance for estimation after model selection and for model averaging with data dependent weights, it is of interest to develop optimal designs which address the specific issues of data dependent weights directly. This is a very challenging problem for future research, as there is no simple expression for the asymptotic mean squared error of these estimates. A first approach to this problem is an adaptive one, and improving the accuracy of such adaptive designs is a further interesting and very challenging question for future research.


Acknowledgements This work has also been supported in part by the Collaborative Research Center “Statistical modeling of nonlinear dynamic processes” (SFB 823, Teilprojekt C2, T1) of the German Research Foundation (DFG).


References

  • K. Alhorn, K. Schorning, and H. Dette (2019) Optimal designs for frequentist model averaging. Biometrika to appear. Cited by: §1, §3.2.
  • A. C. Atkinson and V. V. Fedorov (1975) The design of experiments for discriminating between two rival models. Biometrika 62, pp. 57–70. Cited by: §1.
  • G. E. P. Box and W. J. Hill (1967) Discrimination among mechanistic models. Technometrics 9 (1), pp. 57–71. Cited by: §1.
  • F. Bretz, J. Hsu, and J. Pinheiro (2008) Dose finding – a challenge in statistics. Biometrical Journal 50 (4), pp. 480–504. External Links: ISSN 0323-3847, Link, Document Cited by: §2.1.
  • S. Buatois, S. Ueckert, N. Frey, S. Retout, and F. Mentré (2018) Comparison of model averaging and model selection in dose finding trials analyzed by nonlinear mixed effect models. The AAPS journal 20, pp. 56. Cited by: §1, §2.2.1.
  • K. P. Burnham and D. R. Anderson (2002) Model selection and multimodel inference: a practical information-theoretic approach (2nd ed.). Springer-Verlag, New York. Cited by: §1.
  • L. Chen, A. T. K. Wan, G. Tso, and X. Zhang (2018) A model averaging approach for the ordered probit and nested logit models with applications. Journal of Applied Statistics 45 (16), pp. 3012–3052. External Links: Document, Link, https://doi.org/10.1080/02664763.2018.1450367 Cited by: §2.2.1.
  • H. Chernoff (1953) Locally optimal designs for estimating parameters. Annals of Mathematical Statistics 24, pp. 586–602. Cited by: §3.2.
  • G. Claeskens and N. L. Hjort (2008) Model selection and model averaging. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press. External Links: Document Cited by: §1, §1, §2.1, §2.1.
  • H. Dette (1990) A generalization of D- and D1-optimal designs in polynomial regression. The Annals of Statistics 18, pp. 1784–1805. Cited by: §1.
  • H. Dette, V. B. Melas, and R. Guchenko (2015) Bayesian T-optimal discriminating designs. The Annals of Statistics 43 (5), pp. 1959–1985. External Links: Document, Link Cited by: §1.
  • B. E. Hansen (2007) Least squares model averaging. Econometrica 75 (4), pp. 1175–1189. External Links: ISSN 0012-9682, Link, Document Cited by: §2.1.
  • N. L. Hjort and G. Claeskens (2003) Frequentist Model Average Estimators. Journal of the American Statistical Association 98 (464), pp. 879–899. Note: doi: 10.1198/016214503000000828 External Links: ISSN 0162-1459, Link, Document Cited by: §1, §1, §1, §2.1, §3.1.
  • J. A. Hoeting, D. Madigan, A. E. Raftery, and C. T. Volinsky (1999) Bayesian model averaging: a tutorial (with comments by M. Clyde, David Draper and E. I. George, and a rejoinder by the authors). Statist. Sci. 14 (4), pp. 382–417. External Links: Document, Link Cited by: §1.
  • H. Hong and B. Preston (2012) Bayesian averaging, prediction and nonnested model selection. Journal of Econometrics 167 (2), pp. 358 – 369. Note: Fourth Symposium on Econometric Theory and Applications (SETA) External Links: ISSN 0304-4076, Document, Link Cited by: §1.
  • G. Kapetanios, V. Labhard, and S. Price (2008) Forecasting using bayesian and information-theoretic model averaging. Journal of Business & Economic Statistics 26 (1), pp. 33–41. External Links: Document Cited by: §1, §2.1.
  • J. Kiefer (1974) General Equivalence Theory for Optimum Designs (Approximate Theory). The Annals of Statistics 2 (5), pp. 849–879 (EN). External Links: ISSN 0090-5364, 2168-8966, Link, Document, MathReview Entry Cited by: §3.1.
  • S. Konishi and G. Kitagawa (2008) Information criteria and statistical modeling. John Wiley & Sons, New York. Cited by: §1.
  • H. Liang, G. Zou, A. T. K. Wan, and X. Zhang (2011) Optimal Weight Choice for Frequentist Model Average Estimators. Journal of the American Statistical Association 106 (495), pp. 1053–1066. Note: doi: 10.1198/jasa.2011.tm09478 External Links: ISSN 0162-1459, Link, Document Cited by: §2.1.
  • J. López-Fidalgo, C. Tommasi, and P. C. Trandafir (2007) An optimal experimental design criterion for discriminating between non-normal models. Journal of the Royal Statistical Society, Series B 69, pp. 231–242. Cited by: §1.
  • J. MacDougall (2006) Analysis of Dose-Response Studies - Model. In Dose Finding in Drug Development, N. Ting (Ed.), pp. 127–145. Cited by: §2.1.
  • [22] H. Dette, C. Kiss, M. Bevanda, and F. Bretz (2010) Optimal designs for the EMAX, log-linear and exponential models. Biometrika 97 (2), pp. 513–518. External Links: ISSN 0006-3444, Link, Document Cited by: §4.1.
  • J. Pinheiro, B. Bornkamp, and F. Bretz (2006) Design and analysis of dose-finding studies combining multiple comparisons and modeling procedures. Journal of Biopharmaceutical Statistics 16, pp. 639–656. External Links: Document Cited by: §2.2, §4.1.
  • B. M. Pötscher (1991) Effects of model selection on inference. Econometric Theory 7 (2), pp. 163–185. External Links: ISSN 02664666, 14694360, Link Cited by: §1.
  • M. J. D. Powell (1994) A direct search optimization method that models the objective and constraint functions by linear interpolation. In Advances in Optimization and Numerical Analysis, J. Hennart and S. Gomez (Eds.), pp. 51–67 (en). External Links: ISBN 9789401583305, 9401583307 Cited by: §4.1.
  • F. Pukelsheim (2006) Optimal Design of Experiments. Classics in Applied Mathematics, Society for Industrial and Applied Mathematics. Note: DOI: 10.1137/1.9780898719109 External Links: ISBN 978-0-89871-604-7, Link Cited by: §3.1.
  • A. Raftery and Y. Zheng (2003) Discussion: performance of bayesian model averaging. Journal of the American Statistical Association 98, pp. 931–938. Cited by: §1, §3.1.
  • K. Schorning, B. Bornkamp, F. Bretz, and H. Dette (2016) Model selection versus model averaging in dose finding studies. Statistics in Medicine 35 (22), pp. 4021–4040 (en). External Links: ISSN 1097-0258, Link, Document Cited by: §1, §2.2.1.
  • S. Stigler (1971) Optimal experimental design for polynomial regression.. Journal of the American Statistical Association 66, pp. 311–318. Cited by: §1.
  • J. H. Stock and M. W. Watson (2003) Forecasting output and inflation: the role of asset prices. Journal of Economic Literature 41 (3), pp. 788–829. External Links: Document, Link Cited by: §1.
  • J. H. Stock and M. W. Watson (2004) Combination forecasts of output growth in a seven-country data set. Journal of Forecasting 23 (6), pp. 405–430. External Links: Document, Link, https://onlinelibrary.wiley.com/doi/pdf/10.1002/for.928 Cited by: §2.1.
  • C. Tommasi and J. López-Fidalgo (2010) Bayesian optimum designs for discriminating between models with any distribution. Computational Statistics & Data Analysis 54 (1), pp. 143–150. Cited by: §1.
  • C. Tommasi (2009) Optimal designs for both model discrimination and parameter estimation. Journal of Statistical Planning and Inference 139, pp. 4123–4132. Cited by: §1.
  • D. Ucinski and B. Bogacka (2005) T-optimum designs for discrimination between two multiresponse dynamic models. Journal of the Royal Statistical Society, Series B 67, pp. 3–18. Cited by: §1.
  • A. W. van der Vaart (1998) Asymptotic statistics. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press, Cambridge. External Links: Document Cited by: §6.2.
  • D. Verrier, S. Sivapregassam, and A. Solente (2014) Dose-finding studies, mcp–mod, model selection, and model averaging: two applications in the real world. Clinical Trials 11, pp. 476–484. External Links: Document Cited by: §3.1.
  • L. Wassermann (2000) Bayesian model selection and model averaging. Journal of Mathematical Psychology 44, pp. 92–107. Cited by: §2.1.
  • H. White (1982) Maximum likelihood estimation of misspecified models. Econometrica 50 (1), pp. 1–25. External Links: Document, Link Cited by: §3.1, §6.1, §6.2.
  • M.-M. Zen and M.-H. Tsai (2002) Some criterion-robust optimal designs for the dual problem of model discrimination and parameter estimation.. Sankhya: The Indian Journal of Statistics 64, pp. 322–338. Cited by: §1.

6 Technical assumptions and proof of Theorem 3.1

6.1 Assumptions

Following White (1982) we assume:

  • (A1) The random variables are independent and have a common distribution function with a measurable density with respect to a dominating measure.

  • (A2) The distribution function of each candidate model has a measurable density with respect to the same dominating measure that is continuous in the parameter.

  • (A3) For all parameter values the expectation of the log-density of the true model exists (where the expectation is taken with respect to the true distribution), and for each candidate model the log-density is dominated by a function that is integrable with respect to the dominating measure and does not depend on the parameter. Furthermore, the Kullback-Leibler divergence (3.3) has a unique minimum

    (A.1)

    and the minimizer is an interior point of the parameter space.

  • (A4) For all parameter values the log-density of each candidate model is a measurable function of the observation and continuously differentiable with respect to the parameter.

  • (A5) The entries of the (matrix valued) functions of first and second order derivatives of the log-densities are dominated by integrable functions, for all parameters and all experimental conditions.

  • (A6) The matrices $A_\ell(\xi)$ and $B_\ell(\xi)$ in (3.5) and (3.6) are nonsingular.

  • (A7) The functions $\mu_\ell$ ($\ell = 1,\ldots,r$) are once continuously differentiable.

6.2 Proof of Theorem 3.1

By equation (A.2) in White (1982) we have

(A.2)

where $\xrightarrow{P}$ denotes convergence in probability (note that the matrix $A_\ell(\xi)$ is nonsingular by assumption). An application of the multivariate central limit theorem now leads to

(A.3)

where $B_\ell(\xi)$ is defined in (3.6). Combining (A.2) and (A.3) we obtain the weak convergence of the vector of maximum likelihood estimates, that is,

(A.4)

where the asymptotic covariance matrix is a block matrix with entries

(A.5)

and the vector of centering terms is given by

(A.6)

Next, we define for the parameter vector the corresponding projection and the vector

(A.7)

with derivative

(A.8)

An application of the Delta method (see, for example, van der Vaart (1998, Chapter 3)) now shows that