External/historical control information plays an increasingly important role at the design and analysis stages of a new clinical trial. For clinical trial sponsors, such information can lead to a more efficient study in terms of shorter duration and reduced cost. From the ethical perspective, the number of patients assigned to the control arm can potentially be lowered without compromising the overall statistical power. Bayesian methods are naturally suited for leveraging external controls by utilizing the prior information derived therefrom. Two popular classes of methods have been proposed: power prior approaches IbrahimEtal_2015 and meta-analytic-predictive (MAP) prior approaches NeuenschwanderEtal_2010. The power prior approach assumes an identical parameter (e.g., mean response or rate) across external and current data, and discounts the former with a discounting factor. In contrast, under the MAP prior such parameters arise from a common distribution via the exchangeability assumption, in which between-trial heterogeneity is controlled by (hyper-)parameters. A comparative review of statistical methods has been conducted by Viele and colleagues VieleEtal_2014.
It is well acknowledged that utilizing external information carries the risk of erroneous conclusions and bias when historical and current data are heterogeneous, a phenomenon also known as prior-data conflict. This caveat has arguably been the most important cause of reluctance to incorporate external information in trials. Therefore, a critical consideration is the robustness of the statistical models in the presence of prior-data conflict. A popular two-step procedure is test-then-pool, which first assesses the congruence between historical and current data via a hypothesis test of equality, and only pools the data if the null hypothesis is not rejected. On the other hand, the original MAP prior approach NeuenschwanderEtal_2010 belongs to a class of dynamic borrowing approaches VieleEtal_2014. As the name suggests, such approaches determine the extent of borrowing dynamically based on the congruence between historical and current data. Further, the robust MAP (rMAP) prior SchmidliEtal_2014 was proposed as a more robust extension of the MAP prior. The rMAP prior introduces a vague prior component and a mixture weight w into the MAP prior framework. In particular, w is pre-specified based on the anticipated likelihood of prior-data conflict, and the rMAP prior approach regulates the amount of borrowing through w. When w is closer to 1, the rMAP prior is dominated by the vague component, which is less informative but more robust in the presence of increasing prior-data conflict; the opposite holds when w is closer to 0.
Another desirable feature of a robust borrowing method is the ability to adjust the amount of borrowing in a data-dependent and objective manner. In other words, the extent of borrowing is determined by the congruence between historical and current data, as opposed to a pre-specified parameter value such as the mixture weight w in the rMAP prior. This is because it is generally difficult to predict the likelihood of prior-data conflict at the trial design stage, when no current data are available. With such methodology, the amount of uncertainty in protocol development can be reduced, which may facilitate the acceptance of external controls by both sponsors and regulators. Recent developments in this direction include extensions of the power prior paradigm GravestockHeld_2017, NikolakopoulosEtal_2018, BennettEtal_2021, a Bayesian semiparametric MAP prior HupfEtal_2021_MAP_DP, and an empirical Bayes MAP prior LiEtal_2016. In particular, the Bayesian semiparametric MAP prior approach uses the Dirichlet process prior to adaptively learn the relationship between historical and current data. The empirical Bayes MAP prior discounts or amplifies the impact of historical data based on a parameter determined by the congruence between historical and current data. Most of the aforementioned data-dependent borrowing methods focus on a particular type of endpoint, such as binary or normal. While their respective frameworks may be applicable to other types of endpoints, the implementation of such extensions may not always be trivial.
In this research, we propose a novel empirical Bayes robust MAP (EB-rMAP) prior to adaptively leverage external/historical data. As an MAP-prior-based method, the EB-rMAP prior is well suited for handling multiple historical data sources, which can be challenging for empirical Bayes power prior methods NikolakopoulosEtal_2018. Built on Box's prior predictive p-value Box_1980, the EB-rMAP prior framework balances model parsimony and flexibility by introducing only one additional tuning parameter. The computation can be conducted through existing software packages and is therefore highly efficient. The unified framework applies seamlessly to the most popular types of endpoints, including binary, normal, and time-to-event (TTE).
In Section 2, we first briefly review the MAP and rMAP priors and then introduce the EB-rMAP prior framework, with special considerations for the TTE endpoint. We conduct simulation studies for binary, normal, and TTE endpoints in Section 3. The TTE data from Roychoudhury and Neuenschwander RoychoudhuryNeuenschwander_2020 are re-analyzed with the EB-rMAP prior in Section 4. Some future research topics of interest are discussed in the Concluding Remarks section.
2.1 Original, Robust and Empirical Bayes MAP Priors
We use the binary endpoint for illustration purposes. Denote by y and θ the data and the parameter of interest for the current control arm, respectively. In this case, θ is the log-odds of the response rate p and y is the number of responders. Let y_1, ..., y_H be the control data from H historical trials. A hierarchical model is formed in the original MAP prior NeuenschwanderEtal_2010.
(Sampling models of data) For h = 1, ..., H, y_h follows a binomial distribution with response rate p_h, where n_h is the sample size. The sampling models are trial-specific.
(Exchangeability) A common distribution is assumed for the log-odds: θ_h = logit(p_h) ~ N(μ, τ²), h = 1, ..., H.
(Hyper-priors) The mean parameter μ usually follows a non-informative prior. On the other hand, the exchangeability parameter τ measures the between-trial heterogeneity and thus regulates the extent of borrowing. Common choices of priors are the inverse-gamma distribution for τ² and the half-normal/half-t families for τ.
The original MAP prior derived from the hierarchical model is conditional on historical data only. It can be used at the design stage of the current trial, when current data are yet to be observed. Further denote by Y and y the random variable and the observed current data, respectively. When y becomes available, following Bayes' theorem, the corresponding posterior distribution of θ given current and historical data can be decomposed into two components: the MAP prior and the binomial likelihood of y.
As a dynamic borrowing approach, the MAP prior offers some level of robustness, as its distribution is often heavy-tailed NeuenschwanderEtal_2010. Schmidli and colleagues SchmidliEtal_2014 proposed the robust MAP prior, which can improve the operating characteristics in the presence of prior-data conflict. The idea is to add a vague prior component to the MAP prior to hedge against the situation in which considerable heterogeneity is observed between current and historical data:
where p_V(θ) is a vague prior that leads to little or no borrowing and w is a pre-specified weight for the vague prior. The idea of robustification is conceptually straightforward. The challenge, however, is that the MAP prior usually has no known closed form, which makes it difficult to sample from directly in Bayesian computation. With an MCMC sample from the MAP prior, the authors chose to replace it with a parametric approximation based on the results of Dalal and Hall DalalHall_1983, who showed that any parametric prior can be satisfactorily approximated by a mixture of conjugate priors. With binomial data, where p is the binomial probability of interest, the MAP prior can be approximated by a weighted mixture of beta distributions, as the beta distribution is conjugate to the binomial:
where the mixture weights w_k sum to 1. The number of mixture components K is at most the number of historical trials, and is usually chosen to be the smallest number that produces an adequate approximation based on a certain criterion, e.g., the Kullback-Leibler divergence or AIC/BIC. The component parameters are estimated by the EM algorithm. The vague prior p_V takes the same form as the components of the mixture. For example, with binary data, p_V can be the standard uniform prior Beta(1, 1) or the Jeffreys prior Beta(0.5, 0.5). Formally, the rMAP prior (2) is the weighted mixture of the approximated MAP prior and the vague component.
With the rMAP prior (2), the posterior distribution is also a weighted mixture of conjugate distributions, of which both the weights and the component-wise parameters are updated analytically.
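As a concrete illustration of this analytic update, the sketch below (our own stdlib-only code, not from the RBesT package; the mixture values used in the example are hypothetical) updates a Beta-mixture prior with binomial current data: each component is updated conjugately, and the mixture weights are rescaled by the component-wise Beta-Binomial marginal likelihoods.

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def update_beta_mixture(weights, params, y, n):
    """Posterior of a Beta-mixture prior after observing y successes in n trials.

    weights: prior mixture weights (summing to 1)
    params:  list of (a_k, b_k) Beta parameters
    Returns updated (weights, params): each component is updated conjugately,
    and the weights are rescaled by the component-wise marginal likelihoods.
    """
    log_w = []
    new_params = []
    log_choose = math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
    for w, (a, b) in zip(weights, params):
        # marginal (Beta-Binomial) likelihood of y under component k
        log_ml = log_choose + log_beta(a + y, b + n - y) - log_beta(a, b)
        log_w.append(math.log(w) + log_ml)
        new_params.append((a + y, b + n - y))
    m = max(log_w)
    w_un = [math.exp(lw - m) for lw in log_w]
    s = sum(w_un)
    return [w / s for w in w_un], new_params
```

For instance, with a 50/50 mixture of an informative Beta(10, 30) (mean 0.25) and a vague Beta(1, 1), observing 25 responders out of 50 shifts most of the posterior weight to the vague component, reflecting prior-data conflict.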
The difficulty of sampling from the unknown distribution of the MAP prior can thus be circumvented by working with the mixture of conjugate priors (2). Yet another practical challenge in implementing the robust MAP prior is to properly specify the mixture weight w. The general principle is to set w close to 0 if the historical data are believed to be similar to the current data, or close to 1 otherwise. However, ascertaining the congruence between historical and current data is far from easy at the design stage. In most cases, multiple values of w are examined in simulations. The number of scenarios grows considerably when interim analyses are involved, as various values of w might be considered at each interim. Therefore, methods that objectively adjust the extent of borrowing based on the observed current data are desirable to boost the acceptance of historical data borrowing.
For the binomial endpoint, Li and colleagues LiEtal_2016 proposed an empirical Bayes MAP (EB-MAP) prior based on the mixture (1) by introducing an additional positive parameter, denoted c here, that scales the parameters of each Beta component. The parameter c ensures that the mean of each Beta component remains unchanged while its variance is altered: when c > 1, the variance becomes smaller, which results in more borrowing, or vice versa. The value of c is determined by maximizing the marginal likelihood of the current data once they become available. In particular, the marginal likelihood is a weighted sum of Beta-Binomial probability mass functions. Convergence issues may arise as c approaches either 0 or infinity, and the authors proposed mitigating procedures to stabilize the numerical optimization.
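A minimal sketch of this optimization (our own notation: the scaling parameter is denoted c, each component Beta(a_k, b_k) becomes Beta(c·a_k, c·b_k), and the grid of candidate values is illustrative) evaluates the mixture Beta-Binomial marginal likelihood over the grid:

```python
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabinom_pmf(y, n, a, b):
    # Beta-Binomial(n, a, b) probability mass at y
    log_p = (math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
             + log_beta(a + y, b + n - y) - log_beta(a, b))
    return math.exp(log_p)

def ebmap_c(weights, params, y, n, grid):
    """Empirical Bayes choice of the variance-scaling parameter c:
    Beta(a_k, b_k) becomes Beta(c*a_k, c*b_k) (same mean, smaller variance
    when c > 1); c maximizes the marginal likelihood of (y, n)."""
    def marginal(c):
        return sum(w * betabinom_pmf(y, n, c * a, c * b)
                   for w, (a, b) in zip(weights, params))
    return max(grid, key=marginal)
```

With a single component Beta(10, 30) (mean 0.25) and n = 50, current data that agree with the prior (y = 12) favor a large c (more borrowing), whereas conflicting data (y = 25) favor a small c.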
2.2 A Novel Empirical Bayes Robust MAP Prior
We hereby propose a novel empirical Bayes robust MAP prior framework that allows borrowing historical data in an adaptive manner. The foundation of our empirical Bayes method is the prior predictive p-value. Box Box_1980 discussed how the prior predictive distribution can be used to gauge the compatibility of the data and the prior information. In recent research, the prior predictive p-value has been used for evaluating prior-data conflict and for choosing the optimal discounting parameter GravestockHeld_2017, NikolakopoulosEtal_2018, BennettEtal_2021. We utilize the two-sided prior predictive p-value that is also used in Nikolakopoulos et al. NikolakopoulosEtal_2018. Let T(Y) be a statistic of the current data Y. The two-sided prior predictive p-value in its most general form is twice the smaller of the two tail probabilities of the observed T(y) under the predictive distribution of Y given the historical data. In our case, the prior is the robust MAP prior (2); the prior predictive p-value is therefore a function of the mixture weight w in the rMAP prior.
With binomial data, we set T to the identity function. The prior predictive p-value is then computed from the tail probabilities of the observed number of responders y under the prior predictive distribution.
The optimal mixture weight w* in the EB-rMAP prior can be defined as the smallest value among all w's that lead to a reasonably large prior predictive p-value. Since a small p-value indicates strong prior-data conflict, we recommend setting the threshold to a large value in principle, reflecting the fact that borrowing is considered only if there is weak evidence of prior-data conflict. We denote this threshold γ, and it is the only tuning parameter in the EB-rMAP prior. The sample size of the current data may also need to be taken into consideration; more discussion on this is presented in the next section. On the other hand, if the p-value never exceeds γ, w* is set to 1, which implies that the vague prior is used. This happens when the observed current data are very different from the historical data. Specifically, we have
The crucial step in the EB-rMAP approach is the calculation of the prior predictive p-value, which requires ascertaining the prior predictive distribution of the current data corresponding to the robust MAP prior (2). As discussed previously, the approximated MAP prior is a mixture of Beta distributions. Following Fubini's theorem, the prior predictive density is a mixture of Beta-Binomial densities, of which each component corresponds to an individual Beta prior and the binomial likelihood. This distribution can be calculated with the RBesT package Weber_2021_RBesT. The tail probabilities are then computed based on the distribution function of the mixture, and the p-value is obtained according to (3).
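The steps above can be sketched end-to-end as follows (a self-contained illustration using log-gamma identities rather than RBesT; the mixture parameters and the grid resolution in the example are hypothetical): compute the mixture Beta-Binomial prior predictive, the two-sided p-value as a function of w, and the smallest w on a grid meeting the threshold γ.

```python
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabinom_pmf(y, n, a, b):
    return math.exp(math.lgamma(n + 1) - math.lgamma(y + 1)
                    - math.lgamma(n - y + 1)
                    + log_beta(a + y, b + n - y) - log_beta(a, b))

def prior_pred_pvalue(w, y, n, map_mix, vague):
    """Two-sided prior predictive p-value of y under the rMAP prior
    (1 - w) * [MAP mixture] + w * [vague]; the prior predictive is the
    corresponding mixture of Beta-Binomial pmfs."""
    comps = [((1 - w) * wk, a, b) for wk, (a, b) in map_mix] + [(w,) + vague]
    pmf = [sum(wk * betabinom_pmf(t, n, a, b) for wk, a, b in comps)
           for t in range(n + 1)]
    lower = sum(pmf[: y + 1])   # Pr(Y <= y)
    upper = sum(pmf[y:])        # Pr(Y >= y)
    return min(1.0, 2 * min(lower, upper))

def eb_rmap_weight(y, n, map_mix, vague, gamma, grid_size=101):
    """Smallest w on a grid with p-value >= gamma; 1 if none qualifies."""
    for i in range(grid_size):
        w = i / (grid_size - 1)
        if prior_pred_pvalue(w, y, n, map_mix, vague) >= gamma:
            return w
    return 1.0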
The EB-rMAP method differs from the EB-MAP prior LiEtal_2016 in several aspects. First, EB-rMAP is built on the robust MAP prior SchmidliEtal_2014, while EB-MAP modifies the original MAP prior NeuenschwanderEtal_2010. Second, the optimization procedures are very different, although both methods are empirical Bayes in nature. The parameter being optimized in the EB-rMAP method is the mixture weight w, bounded within [0, 1]. In contrast, there is no upper limit for its counterpart in EB-MAP. Therefore, the optimization in our method can be more computationally tractable, such that no additional rule is needed to ensure convergence: one can create a fine grid of w, evaluate the prior predictive p-value at each value, and determine w* following (4). Finally, the EB-rMAP method provides a unified framework that applies to the most commonly encountered endpoints, as listed in Table 1. In particular, the important time-to-event (TTE) endpoint can be modeled as count data under the (piecewise) exponential model so that the EB-rMAP prior is applicable. We elaborate the EB-rMAP prior for the TTE endpoint in the next subsection.
| Type of Endpoint | Likelihood | Prior | Prior Predictive |
| --- | --- | --- | --- |
| Binary | Binomial | Beta | Beta-Binomial |
| Continuous | Normal (known SD) | Normal | Normal |
| Time-to-event (under (piecewise) exponential model) | Poisson | Gamma | Gamma-Poisson |
2.3 EB-rMAP Prior with Time-To-Event Endpoint
Compared to continuous and binary endpoints, historical data borrowing with a TTE endpoint had received less attention until recently RoychoudhuryNeuenschwander_2020, SmithEtal_2020. Many Bayesian models for TTE data assume that the time-to-event follows an exponential distribution, which has a constant hazard rate. This assumption may be overly simplistic in real applications; a more flexible alternative is the piecewise exponential (PWE) model. The PWE model partitions the follow-up period into J mutually exclusive intervals, and the hazard rate λ_j is assumed constant within each interval. The PWE model reduces to the exponential model when J = 1. The historical TTE data in the jth interval are summarized by two quantities: the number of events d_j and the total at-risk time (exposure) t_j.
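When individual-level data are available, the interval summaries (d_j, t_j) can be computed by a routine such as the following sketch (our own helper, not part of any package; the cut points are supplied by the analyst):

```python
def pwe_summary(times, events, cuts):
    """Summarize individual (time, event) data into interval-specific
    event counts d_j and exposures t_j under a piecewise partition.

    times:  follow-up time for each subject
    events: 1 if the subject had the event, 0 if censored
    cuts:   interval boundaries [c_0 = 0, c_1, ..., c_J] covering follow-up
    """
    J = len(cuts) - 1
    d = [0] * J
    t = [0.0] * J
    for time, ev in zip(times, events):
        for j in range(J):
            lo, hi = cuts[j], cuts[j + 1]
            if time <= lo:
                break
            t[j] += min(time, hi) - lo      # exposure accrued in interval j
            if ev == 1 and lo < time <= hi:
                d[j] += 1                   # event occurred in interval j
    return d, t
```

For example, three subjects with follow-up times 0.5, 1.5, and 2.0 years (the first and third experiencing the event) and cut points (0, 1, 2] contribute one event and 2.5 years of exposure to the first interval, and one event and 1.5 years to the second.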
The EB-rMAP approach is applicable to TTE data assuming an underlying PWE model. Let h = 1, ..., H index the historical trials. Per the PWE model, the number of events d_hj in interval j of trial h has a Poisson distribution with mean λ_hj × t_hj. With a log link function, the corresponding generalized linear model (a Poisson regression model in this case) includes log(t_hj) as an additive term, which is termed the offset of the model. The original MAP prior assumes that the log-hazard rates arise from a common normal distribution: log(λ_hj) ~ N(μ_j, τ_j²). The mean log-hazard rate μ_j generally has a weakly-informative normal prior, and τ_j is the exchangeability parameter that regulates the amount of borrowing.
Given the Poisson likelihood for the event counts, the MAP prior for λ_j may be approximated by a weighted mixture of Gamma priors. To form the robust MAP prior, the vague prior can be a Gamma prior with an effective sample size of 1. Note that in the case of the TTE endpoint, the effective sample size refers to the number of events, as opposed to the total number of subjects for binary and normal endpoints. Denote by d_j and t_j the number of events and the total exposure in interval j of the current trial, respectively. Since the Gamma-Poisson conjugate model is supported in the RBesT package, the EB-rMAP weight w*_j can be determined efficiently using the same procedure described in the previous subsection with the current data (d_j, t_j). If J > 1 in the PWE model, the EB-rMAP prior is implemented for each time interval.
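Under a Gamma(shape a, rate b) prior on the hazard, the prior predictive distribution of the event count with exposure t is negative binomial. A sketch of the resulting two-sided prior predictive p-value for one interval (our own stdlib-only implementation; the Gamma parameters in the example are illustrative) is:

```python
import math

def gamma_pois_pred_pmf(d, t, a, b):
    """Prior predictive pmf of d events with exposure t when the hazard
    lambda ~ Gamma(shape=a, rate=b): a negative-binomial distribution."""
    return math.exp(math.lgamma(a + d) - math.lgamma(d + 1) - math.lgamma(a)
                    + a * math.log(b / (b + t)) + d * math.log(t / (b + t)))

def tte_pvalue(w, d, t, map_mix, vague):
    """Two-sided prior predictive p-value of observing d events with
    exposure t under the rMAP prior (1 - w) * [MAP mixture] + w * [vague]."""
    comps = [((1 - w) * wk, a, b) for wk, (a, b) in map_mix] + [(w,) + vague]
    pmf = lambda k: sum(wk * gamma_pois_pred_pmf(k, t, a, b)
                        for wk, a, b in comps)
    lower = sum(pmf(k) for k in range(d + 1))   # Pr(D <= d)
    return min(1.0, 2 * min(lower, 1.0 - lower + pmf(d)))
```

With a Gamma(8, 20) MAP component (mean hazard 0.4) and 30 years of current exposure, observing 12 events (rate 0.4) yields a large p-value, whereas observing 30 events (rate 1.0) yields a small one; putting weight on the vague component inflates the p-value in the conflicting case.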
3 Simulation Studies
3.1 Normal Endpoint with Known Standard Deviation
Under the single-arm setting, we evaluate the performance of EB-rMAP in simulations. The historical data used to derive the original MAP prior are from five historical trials in moderate to severe Crohn's disease HueberEtal_2012, available in the crohn dataset of the RBesT package. The sample sizes ranged from 20 to 328, and the continuous endpoint was the change from baseline in Crohn's Disease Activity Index (CDAI) at week 6. An improvement in outcome corresponds to a negative change from baseline. We assume a common, known standard deviation of the endpoint. A random-effects meta-analysis yields a point estimate of -46.8 for the mean response. The historical means are assumed to follow a normal distribution N(μ, τ²). To derive the original MAP prior, we posit a weakly-informative prior for μ, as the data would be sufficiently informative for it. The exchangeability parameter τ has a half-normal prior. This configuration results in a fairly informative MAP prior with an effective sample size (ESS, by the definition of Neuenschwander et al. NeuenschwanderEtal_2020_ESS) of 31.9. The vague component used to construct the robust MAP prior is a normal prior with an ESS of 1.
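For the normal endpoint with known standard deviation, the prior predictive of the sample mean under a normal(-mixture) prior is again normal, so the p-value has a closed form via the normal CDF. The sketch below (our own; the σ and mixture parameters shown in the usage example are illustrative placeholders, not the values used in this paper) computes it:

```python
import math

def norm_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def normal_prior_pred_pvalue(w, ybar, n, sigma, map_mix, vague):
    """Two-sided prior predictive p-value of the observed mean ybar
    (n observations, known SD sigma) under the rMAP prior
    (1 - w) * [normal mixture] + w * [vague normal].

    map_mix: [(weight_k, (mean_k, sd_k)), ...]; vague: (mean, sd).
    The predictive of ybar for a N(m, s^2) component is N(m, s^2 + sigma^2/n).
    """
    comps = [((1 - w) * wk, m, s) for wk, (m, s) in map_mix] + [(w,) + vague]
    pred_sd = lambda s: math.sqrt(s ** 2 + sigma ** 2 / n)
    lower = sum(wk * norm_cdf(ybar, m, pred_sd(s)) for wk, m, s in comps)
    return 2 * min(lower, 1.0 - lower)
```

For instance, with a single N(-46.8, 15²) prior component, a hypothetical σ = 88 and n = 40, an observed mean of -46.8 gives p = 1, while an observed mean of -10 gives a small p unless weight is shifted to a vague component.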
We first investigate the behavior of the EB-rMAP weight w* and how it is impacted by the current data sample size and the tuning parameter γ. Figure 1 shows the values of w* against different values of the observed mean response in the current data. In the left panel, the curves correspond to different current sample sizes, with all other specifications fixed. Similarly, in the right panel, we only vary γ (0.8, 0.85 and 0.9). The meta-analysis point estimate of the historical data is marked by the vertical dashed line. There is generally a window around the historical mean response within which the EB-rMAP weight is lower than 1. The weight tends to 0, indicating that the informative MAP prior is essentially used, when the observed current and historical mean responses are close. The method puts more weight on the vague prior as the two mean responses get further apart: the stronger the prior-data conflict, the closer w* approaches 1. These are very favorable behaviors, because w* responds properly to prior-data conflict, or the lack thereof. Fixing all other aspects, the window is wider with a smaller γ due to a less stringent requirement of agreement. On the other hand, although the window tends to be wider with a smaller current sample size, its impact is fairly limited in this configuration; we have seen the sample size having a larger impact in other scenarios.
With the original MAP and vague priors, we design a current single-arm study. We vary the true current mean response between -55 and -35. For each simulated current dataset, the study is deemed successful if a pre-specified posterior-probability decision rule is met. We compare our EB-rMAP approach with robust MAP priors with fixed values of w. Recall that the scenario of w = 0 is essentially the original MAP prior, while the vague prior is used when w = 1. The tuning parameter γ in EB-rMAP is set to 0.9. The operating characteristics for evaluation are the probability of success (PoS) Chuang_2006, the absolute bias, and the mean squared error (MSE) of the posterior median estimator.
Simulation results are presented in Figure 2, based on 5,000 iterations at each value of the true current mean response. The PoS is well maintained with EB-rMAP, as its PoS curve almost overlaps with that of the original MAP prior (w = 0). On the other hand, the absolute bias of EB-rMAP is similar to that of the vague prior (w = 1), which is expected to yield the smallest bias. The EB-rMAP prior is also comparable with the vague prior in terms of MSE.
3.2 Binary Endpoint
In the simulation with a binary endpoint, we use the AS dataset from the RBesT package, which contains data from 8 clinical trials in ankylosing spondylitis. Improvement in ankylosing spondylitis is assessed by the Assessment of SpondyloArthritis International Society (ASAS) score containing four domains. The binary efficacy endpoint is ASAS20 at week 6, defined as an improvement of at least 20% and an absolute improvement of at least 1 unit (on a 0-10 scale) in at least three of the four domains, with no worsening of the remaining domain. A meta-analysis yields a point estimate of 0.25 (95% CI: 0.20, 0.31) for the ASAS20 rate from the historical data.
Denote by p the ASAS20 rate. The original MAP prior method assumes that the log-odds of the ASAS20 rates in the historical and current studies arise from a common normal distribution N(μ, τ²). We posit a half-normal prior for τ, and the overall mean logit μ has a normal prior that is considered weakly-informative on the log-odds scale. The derived MAP prior has an ESS of 37.7. Since the original MAP prior for a binary endpoint is approximated by a mixture of Beta distributions, the vague component used to construct the robust MAP prior should also be a Beta prior, and we use the uniform Beta(1, 1) prior (ESS = 2) in the simulation.
We compare the EB-rMAP approach with robust MAP priors with fixed weights, using the same three operating characteristics as in the previous subsection. The current single-arm study has a sample size of 50. We perform 5,000 iterations at each true current ASAS20 rate considered, ranging from 0.20 to 0.32. The decision rule to claim trial success is based on the posterior probability of the ASAS20 rate.
The results presented in Figure 3 suggest that the EB-rMAP approach strikes a good balance between PoS and estimation quality. It has nearly identical PoS to the original MAP prior, while its bias and MSE are mostly between those of the original MAP prior and the vague prior.
3.3 Time-to-Event Endpoint
In this subsection, we evaluate the performance of the EB-rMAP prior with a TTE endpoint using simulated historical data. Without loss of generality, we set J = 1 in the PWE model. This essentially assumes the TTE is exponentially distributed, so that the hazard rate λ and the median overall survival (mOS) are related by mOS = log(2)/λ. We consider four historical studies and fix their total exposures (in years) at 5, 10, 15 and 20, respectively. The numbers of deaths are simulated from a Poisson distribution according to (5), where the hazard rate is set to 0.4 for all four historical studies. This translates to a median overall survival of 1.73 years. For the current trial, the total exposure is fixed at 30 years, and we consider different hazard rates ranging from 0.3 to 0.7 (mOS from 1 to 2.31 years). At each current hazard rate, 1,000 simulations are conducted.
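The data-generating step of this simulation can be sketched as follows (a stdlib-only illustration; Knuth's Poisson sampler is adequate for the moderate means involved here):

```python
import math
import random

def rpois(lam, rng):
    # Knuth's multiplicative algorithm; fine for moderate lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_historical(exposures, hazard, seed=1):
    """Simulate death counts d_h ~ Poisson(hazard * t_h) for each historical
    study, under the exponential (J = 1) model."""
    rng = random.Random(seed)
    return [rpois(hazard * t, rng) for t in exposures]

# Under the exponential model, median OS relates to the hazard by mOS = log(2)/hazard
mos = math.log(2) / 0.4   # about 1.73 years, matching the text
```

One call with the exposures (5, 10, 15, 20) and hazard 0.4 produces one simulated set of historical event counts; the simulation study repeats this 1,000 times per scenario.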
The hyper-priors in the MAP prior (6) are specified as follows. The normal prior for the mean log-hazard rate μ is very weakly-informative. The prior for the exchangeability parameter τ covers small to large between-trial variability and therefore corresponds to moderate borrowing. The mean of the vague Gamma prior in the robust MAP prior, whose effective number of events is 1, is set to the median of the original MAP prior. We compare the EB-rMAP approach to robust MAP priors with fixed weights in the same three metrics as before: PoS, absolute bias, and MSE. The PoS corresponds to a posterior-probability decision rule on the current hazard rate.
Simulation results are reported in Table 2 for the five current hazard rates considered. When the current hazard rate is close to the historical value of 0.4, the EB-rMAP prior preserves power nicely and generally has small bias, with MSE between those of the original MAP prior (w = 0) and the vague prior (w = 1). On the other hand, when the current hazard rate is substantially higher, meeting the decision rule leads to an erroneous conclusion; in this case, the PoS (error rate) and bias of EB-rMAP are comparable to those of the vague prior. This showcases the robustness of the EB-rMAP prior.
[Table 2: PoS, absolute bias, and MSE by method and current hazard rate; absolute bias and MSE are multiplied by 1,000.]
4 An Example of Analysis with Time-To-Event Data
In this section, we illustrate the EB-rMAP approach for the TTE endpoint by re-analyzing the data from 10 oncology studies used in Roychoudhury and Neuenschwander RoychoudhuryNeuenschwander_2020. Ideally, individual-level historical data should be used to ascertain the numbers of events d_j and total exposures t_j. When they are unavailable, these quantities may be extracted under certain assumptions from the Kaplan-Meier plots that are usually presented in medical publications ParmarEtal_1998, as was the case in Roychoudhury and Neuenschwander RoychoudhuryNeuenschwander_2020. The four-year follow-up period is partitioned into 12 intervals in the PWE model. Nine studies are regarded as the historical data, while the remaining one serves as the current data. The data are presented in Appendix A.
The original MAP prior is derived for each interval using the corresponding historical data, and we use the same hyper-parameter priors in (6) for all time intervals. Similar to the simulation, the vague Gamma prior used to construct the robust MAP prior has an effective number of events of 1, and its mean equals the median of the interval-specific MAP prior. The threshold γ in the EB-rMAP prior is set to 0.75.
Analysis results by interval are reported in Table 3. The second to fourth columns display the median of the MAP prior derived from the historical data, the observed hazard rate in the current trial (number of events divided by total exposure), and the mixture weight w* selected by the EB-rMAP approach. While w* is determined by multiple factors, the general trend regarding the point estimates is that the more the historical and current hazard rates differ, the larger the weight w* is. The last three columns show the posterior median hazard rate and the 95% credible interval for the respective methods. The analyses utilizing the original MAP and vague priors are sometimes referred to as the exchangeability (EX) and non-exchangeability (NEX) models, respectively NeuenschwanderEtal_2016. It can be seen that the EB-rMAP approach is equivalent to the EX or NEX model when w* is 0 or 1, respectively. In other cases, its posterior median hazard rate and credible interval width are generally in between the counterparts from the EX and NEX models. Finally, Figure 4 shows the posterior distributions of the hazard rate by each method in the (1.25, 1.50] years interval, where w* = 0.57. The NEX model expectedly has the widest spread, while the density of EB-rMAP has a slightly heavier right tail than that of the EX model.
| Interval (years) | Historical¹ | Current² | w* | EB-rMAP³ | MAP (EX)³ | Vague (NEX)³ |
| --- | --- | --- | --- | --- | --- | --- |
| 0.00-0.25 | 0.169 | 0.043 | 1.00 | 0.035 (0.002, 0.164) | 0.079 (0.018, 0.215) | 0.035 (0.002, 0.170) |
| 0.25-0.50 | 0.204 | 0.221 | 0.03 | 0.208 (0.110, 0.361) | 0.209 (0.112, 0.360) | 0.207 (0.074, 0.445) |
| 0.50-0.75 | 0.326 | 0.854 | 0.96 | 0.809 (0.483, 1.252) | 0.744 (0.444, 1.199) | 0.814 (0.482, 1.266) |
| 0.75-1.00 | 0.557 | 0.000 | 1.00 | 0.015 (0.000, 0.141) | 0.217 (0.035, 0.501) | 0.015 (0.000, 0.144) |
| 1.00-1.25 | 0.552 | 0.114 | 0.20 | 0.178 (0.029, 0.520) | 0.339 (0.100, 0.567) | 0.120 (0.024, 0.354) |
| 1.25-1.50 | 0.298 | 0.427 | 0.57 | 0.362 (0.191, 0.685) | 0.355 (0.198, 0.619) | 0.402 (0.172, 0.777) |
| 1.50-1.75 | 0.374 | 0.552 | 0.83 | 0.475 (0.253, 0.908) | 0.432 (0.266, 0.735) | 0.518 (0.242, 0.962) |
| 1.75-2.08 | 0.406 | 0.233 | 0.52 | 0.296 (0.092, 0.530) | 0.329 (0.142, 0.541) | 0.224 (0.072, 0.514) |
| 2.08-2.50 | 0.279 | 0.000 | 1.00 | 0.003 (0.000, 0.084) | 0.049 (0.005, 0.185) | 0.003 (0.000, 0.081) |
| 2.50-2.92 | 0.124 | 0.305 | 0.00 | 0.243 (0.105, 0.493) | 0.242 (0.104, 0.487) | 0.280 (0.110, 0.571) |
| 2.92-3.33 | 0.052 | 0.115 | 1.00 | 0.094 (0.014, 0.310) | 0.074 (0.022, 0.185) | 0.094 (0.014, 0.309) |
| 3.33-4.00 | 0.068 | 0.000 | 1.00 | 0.000 (0.000, 0.026) | 0.019 (0.001, 0.094) | 0.000 (0.000, 0.026) |

¹ Median hazard rate of the original MAP prior derived from historical data.
² Observed current hazard rate (number of events divided by total exposure).
³ Posterior median hazard rate and 95% credible interval.
5 Concluding Remarks
We propose a novel empirical Bayes robust MAP prior to address an important practical challenge in implementing the robust MAP prior: properly specifying the mixture weight w. Built upon Box's prior predictive p-value, which quantifies prior-data conflict, our EB-rMAP framework allows borrowing historical data adaptively in a data-dependent manner. The computation in the EB-rMAP prior utilizes existing software packages, which greatly reduces the amount of programming; R programs are available on GitHub (link will be provided upon manuscript acceptance). The optimization procedure can be more stable and efficient than existing empirical Bayes methods, as the empirical Bayes mixture weight is bounded within [0, 1]. Simulation studies suggest that the EB-rMAP method strikes a good balance between maintaining the probability of success and yielding robust point estimates. Our unified framework applies seamlessly to the most commonly encountered data types. Compared to the empirical Bayes MAP prior LiEtal_2016, the novel EB-rMAP prior has one additional tuning parameter, γ, that needs to be pre-specified. While this grants more flexibility, extensive simulation studies are required to choose a value of γ that yields satisfactory operating characteristics.
The endpoints considered in our simulation studies and data analysis are all efficacy-oriented. Meanwhile, an important binary safety endpoint often encountered in phase 1 dose-escalation studies is the presence of dose-limiting toxicity (DLT). The Bayesian logistic regression model NeuenschwanderEtal_2008BLRM (BLRM) is a popular model-based approach to fit cumulative DLT data and to make dose recommendations for the next cohort of patients. The model has two parameters, namely the intercept and the slope, for which a bivariate normal prior is posited. Historical information on the dose-toxicity profile may be incorporated via an MAP prior NeuenschwanderEtal_2015. The operating characteristics of the BLRM can depend heavily on the bivariate normal prior due to the limited sample size in dose-escalation trials. Therefore, proper handling of prior-data conflict is crucial to ensure robustness. The idea of the robust MAP prior has been extended to the BLRM via the exchangeable-nonexchangeable (EX-NEX) model NeuenschwanderEtal_2016. Pre-specifying the weights for the EX or NEX model is critical and follows the same principle as the rMAP prior. However, it is challenging to apply the EB-rMAP prior method in this setting, because it is not obvious how to obtain the mixture that approximates the original MAP prior under the BLRM. Future research is needed to explore alternative approaches to quantifying the prior-data conflict.
The EB-rMAP prior adjusts the extent of borrowing based on the agreement between historical and current trials. As real-world evidence (RWE) becomes increasingly important, information from external sources, especially observational studies, is being brought into the historical data borrowing paradigm. In addition to the discrepancy in responses, the distributions of baseline covariates from external sources may differ from those in the current trial, and ignoring such imbalances may lead to bias and erroneous conclusions. This fact has prompted new methods that take the baseline covariates into consideration when adjusting the extent of borrowing WangEtal_2019, LiuEtal_2021_PSMAP. It is of future research interest to explore whether the EB-rMAP prior can be implemented in combination with such methods to yield more robust analyses with RWE.
Data Availability Statement
Data sharing not applicable to this article as no new datasets were generated or analyzed during the current study.
Appendix A Data Used in the TTE Analysis Section
Roychoudhury and Neuenschwander RoychoudhuryNeuenschwander_2020 extracted the data from 10 published Kaplan-Meier curves using the technique of Parmar et al. ParmarEtal_1998. The follow-up period was divided into 12 time intervals (in years). The first nine studies were regarded as historical trials, while the tenth one is the current trial. The numbers of events and total exposures in years are reported in the table below.