Power prior models for treatment effect estimation in a small n, sequential, multiple assignment, randomized trial

12/10/2020
by   Yan-Cheng Chao, et al.

A small n, sequential, multiple assignment, randomized trial (snSMART) is a small-sample, two-stage design where participants receive up to two treatments sequentially, but the second treatment depends on response to the first treatment. The treatment effect of interest in an snSMART is the first-stage response rate, but outcomes from both stages can be used to obtain more information from a small sample. A novel way to incorporate the outcomes from both stages applies power prior models, in which first stage outcomes from an snSMART are regarded as the primary data and second stage outcomes are regarded as supplemental. We apply existing power prior models to snSMART data, and we also develop new extensions of power prior models. All methods are compared to each other and to the Bayesian joint stage model (BJSM) via simulation studies. By comparing the biases and the efficiency of the response rate estimates among all proposed power prior methods, we suggest application of Fisher's exact test or Bhattacharyya's overlap measure to estimate the treatment effect in an snSMART; both have performance mostly as good as or better than the BJSM. We describe the situations where each of these suggested approaches is preferred.



1 Introduction

In rare disease studies, estimating treatment effects efficiently is often a challenging task because information is collected from a relatively small number of participants. Developed to meet this challenge, a small n, sequential, multiple assignment, randomized trial (snSMART) is a two-stage design where participants are given up to two treatments sequentially; whether they receive the same or different treatment in the second stage depends on how they respond to the first stage treatment (tamura2016small). Primary interest in an snSMART is the first stage treatment effect, but when multiple outcomes are obtained from each participant, a method to combine the information across stages can be used to efficiently estimate the treatment effects of interest.

Frequentist and Bayesian approaches have been proposed to pool the results together for estimation. tamura2016small presented a weighted Z-statistic to perform the estimation, but the Z-statistic is not based on all the collected data. To address this limitation, wei2018bayesian and chao2020dynamic presented a Bayesian joint stage model (BJSM) and a joint stage regression model, each of which includes parameters that link first and second stage treatment responses to provide more efficient treatment effect estimates. Here, we present an alternative approach that links data from the two stages through a power prior, which was first proposed by ibrahim2000power.

A power prior contains the likelihood of the historical data, power parameters that quantify the compatibility of the historical and the current data, and prior distributions for the parameters in the likelihood of the current data. The power parameters can be either fixed or random and there are numerous ways the parameters are specified or determined. Extensions of this power prior approach include modified power priors, or normalized power priors (duan2006evaluating; neuenschwander2009note; hobbs2011hierarchical; banbeta2019modified; van2018including), power prior in Bayesian hierarchical models (chen2006relationship), commensurate power priors (hobbs2011hierarchical; van2018including), power priors with an empirical Bayesian approach (gravestock2017adaptive) and power priors with a likelihood-based weight selection criterion (ibrahim2003optimality; ibrahim2015power).

pan2017calibrated proposed a calibrated power prior that utilizes a nonparametric Kolmogorov-Smirnov statistic to measure the compatibility of historical and current data in biosimilar designs. nikolakopoulos2018dynamic developed another calibrated power prior that quantifies the conflict of historical to current data through prior predictive p-values. li2020pa applied the notion of a power prior model to control information borrowing through Bayesian model averaging between pediatric and adult phase I oncology trials.

In previous studies, the idea of power prior models was applied to control how much information should be borrowed from historical data or earlier trials into a current trial. However, information sharing is also crucial in a multistage clinical trial, which motivates our work. In this study, we propose a novel application of power prior models to the estimation of treatment effects in an snSMART, which is a two-stage design. In addition, we introduce novel measures of closeness to describe the compatibility of stage 1 and stage 2 data in our snSMART. In our setting, we consider stage 1 responses as “current” data and stage 2 responses as “historical” data, which may seem counterintuitive. However, because a second stage outcome is obtained after a first stage outcome, second stage outcomes are conditional on the treatment received in the first stage and the response to that treatment. Because of this biased sampling scheme, the second stage outcomes are viewed as supplemental data, and the first stage outcomes are viewed as the primary data, since they are collected in an unbiased, randomized design.

Small sample size is another challenge when applying power prior models to the snSMART setting. In existing designs, the historical data are often assumed to come from a multitude of participants who received the same treatment. In contrast, in an snSMART, it is possible that outcomes will only be obtained from a very small number of participants in the second stage. The operating characteristics of power prior models with small samples have not been investigated before, and thus, we seek to examine their performance in the snSMART setting relative to the existing BJSM.

In our current work, we propose three different power prior models to estimate the response rates of three active treatments in an snSMART. In Section 2, we motivate the use of power prior models in snSMART designs and briefly describe the existing BJSM. In Section 3, we present the power prior models with different power parameter specification approaches. In Section 4, we use simulations to examine how these power prior models perform and compare them to the BJSM under different scenarios, and we close with a discussion in Section 5.

2 Motivating example and existing methods

2.1 ARAMIS trial

Our methods are motivated by the snSMART design of A RAndomized Multicenter study for Isolated Skin vasculitis (ARAMIS) (micheletti2020protocol; wei2018bayesian; chao2020dynamic), shown in Figure 6.1. In brief, all enrolled individuals are randomized to one of three treatments in the first stage. During a six-month follow-up period, each individual is assessed for response. Individuals who respond in the first stage receive the same treatment in the second stage, while first stage non-responders are randomized to one of the alternative treatments in the second stage and followed for six more months for response.

The first stage is a traditional randomized trial; thus, we can estimate treatment effects using only the first stage data. In the proposed power prior methods, these first stage outcomes are called “current data”. By contrast, the second stage outcomes alone could not be used to correctly estimate the response rates because they are conditional on first stage treatment and responses to that treatment. Thus, second stage outcomes serve as “historical” data. Inclusion of “historical” data can provide additional information and increase the efficiency of estimation of treatment effects in small samples. Thus, the application of power prior models to our setting provides a way to incorporate both stages of data such that first stage data are weighted fully, and second stage data receive partial weight through the power prior to provide more efficient treatment estimates in small samples.

2.2 Joint stage models

Frequentist and Bayesian joint stage models are existing approaches to estimating the treatment effects in an snSMART; the details can be found in wei2018bayesian and chao2020dynamic. Because the results from the two models are similar, we briefly present only the BJSM here due to our focus on Bayesian methods.

The (first stage) response rate of treatment j is denoted by π_j, where j ∈ {A, B, C}. Since the response rate of a treatment in the second stage can differ from that in the first stage, and because stage 2 response rates are conditional on stage 1 treatments and responses, we denote the second stage response rate of the first stage responders to treatment j by β_1·π_j, and the second stage response rate of the first stage non-responders to j who receive j′ in the second stage by β_0·π_j′. β_1 and β_0 are called linkage parameters for stage 1 responders and non-responders, respectively, because they link the first stage and second stage response rates. An assumption of the BJSM is that the linkage parameters, β_1 and β_0, do not depend on the first and second stage treatments received. The parameters π_j, β_1, and β_0 can be estimated via Markov chain Monte Carlo with appropriate prior distributions on these parameters.

However, we may not have a priori information about the possible relationship between first and second stage response rates, particularly in rare disease settings, which may make it difficult to pre-specify prior distributions of the linkage parameters. Thus, the power prior approaches presented next provide a framework that circumvents the requirement of assuming proportionality of response rates from stage 1 to stage 2.

3 Methods

We first briefly review the power prior models and their associated notation. We let π = (π_A, π_B, π_C), where the elements are the response rates of treatments A, B, and C, respectively, and let δ_s denote power parameters for different subgroups of individuals, where s = 1, …, S and S is the number of subgroups. In our design, we separate the second stage data into two distinct sets: those from first stage responders and those from first stage non-responders. The individuals in these two subgroups are assumed to share some common within-group characteristics that may affect how they respond to the second stage treatments. Thus, each subgroup can be regarded as a distinct set of “historical” data, and we assume that S = 2 in this study. We also made this assumption of S = 2 because a parsimonious model is preferred when the sample size is small, and two power parameters mimic the two linkage parameters from the BJSM. Let n_j and x_j denote the number of individuals assigned to treatment j and the corresponding number of responders to j in stage 1, respectively, where j ∈ {A, B, C}. Similarly, we let n_js and x_js be the numbers of individuals in stage 2 assigned to treatment j within subgroup s and the corresponding number of responders to j in subgroup s, respectively. Let D_1 = {(x_j, n_j) : j ∈ {A, B, C}} denote the stage 1 data and D_2s = {(x_js, n_js) : j ∈ {A, B, C}} denote the stage 2 data from subgroup s.

In its simplest form, the joint power prior distribution of the first stage response rates in our setting can be formulated as

π(π | D_21, D_22, δ_1, δ_2) ∝ L(π | D_21)^δ_1 · L(π | D_22)^δ_2 · π_0(π),   (1)

where L(π | D_2s) is a likelihood function for second stage outcomes, π_0(π) is the initial prior for π, and 0 ≤ δ_s ≤ 1 for all s. We interpret δ_s as a measure of compatibility of the “current” data and the “historical” data from subgroup s. When δ_s = 0, the corresponding “historical” data, i.e., second stage data, from subgroup s contribute nothing to the estimation of response rates, while δ_s = 1 indicates that the corresponding “historical” data from subgroup s can be pooled together with the “current” data. When combined with the likelihood function of first stage outcomes, the posterior distribution of π is

p(π | D_1, D_21, D_22, δ_1, δ_2) ∝ L(π | D_1) · L(π | D_21)^δ_1 · L(π | D_22)^δ_2 · π_0(π).   (2)

The key issue in the application of power prior models lies in the choice of δ_1 and δ_2. Thus, we next introduce three types of approaches for choosing δ_s and investigate to what extent stage 2 data can be incorporated with stage 1 data to estimate π.
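For intuition, with binomial likelihoods for both stages and a conjugate Beta initial prior, the posterior in Equation (2) for each treatment's response rate is again a Beta distribution in which the stage 2 counts from subgroup s are discounted by δ_s. A minimal sketch of this conjugate update with fixed power parameters (the function name and default hyperparameters are ours, not from the paper):

```python
def power_prior_posterior(x1, n1, stage2, a=1.0, b=1.0):
    """Posterior Beta(alpha, beta) for one treatment's response rate under
    a power prior with fixed weights.

    x1, n1 : stage 1 responders / participants (primary data, full weight)
    stage2 : iterable of (x, n, delta) per subgroup (supplemental data,
             down-weighted by its power parameter delta)
    a, b   : Beta initial prior hyperparameters
    """
    alpha = a + x1 + sum(d * x for x, n, d in stage2)
    beta = b + (n1 - x1) + sum(d * (n - x) for x, n, d in stage2)
    return alpha, beta

# delta_s = 0 ignores a stage 2 subgroup; delta_s = 1 pools it fully
a0, b0 = power_prior_posterior(10, 30, [(4, 8, 0.0), (2, 10, 0.0)])
a1, b1 = power_prior_posterior(10, 30, [(4, 8, 1.0), (2, 10, 1.0)])
```

With δ_1 = δ_2 = 0 this reduces to the stage 1 posterior Beta(a + x_j, b + n_j − x_j), and with δ_s = 1 the subgroup's counts enter at full weight, matching the two extremes described above.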

3.1 Power prior models with likelihood-type criteria

The power parameters δ_1 and δ_2 can be taken as fixed values and determined by likelihood-type criteria, an approach first proposed by ibrahim2003optimality as an extension of the Bayesian Information Criterion (BIC). The rationale of utilizing likelihood-type criteria is to use both “current” and “historical” data to choose the optimal values for δ_1 and δ_2 that minimize the criterion function. Two criteria applied to power prior models are the penalized likelihood-type criterion (PLC) (ibrahim2003optimality; ibrahim2015power) and the marginal likelihood criterion (MLC) (ibrahim2015power; gravestock2017adaptive), the latter of which is also referred to as the empirical Bayesian method.

For the PLC, the “current” and “historical” data are combined in the function

(3)

where k is a constant unrelated to any of the parameters, and B(·, ·) is a beta function. The power parameters δ_1 and δ_2 can then be determined by minimizing the PLC function

(4)

The penalty term allows the chosen δ_s to be higher when the sample size of subgroup s is larger, which corresponds to more weight applied to a subgroup with a larger sample size. After the optimal δ = (δ_1, δ_2) is determined by minimizing Equation (4), we then treat δ as fixed and use Equation (3) to obtain the posterior distribution of all π_j.

For the MLC, we use the marginal likelihood of δ,

MLC(δ) = k′ · ∫ L(π | D_1) · π(π | D_21, D_22, δ_1, δ_2) dπ,   (5)

where k′ is a constant unrelated to any of the parameters and π(π | D_21, D_22, δ_1, δ_2) is the power prior in Equation (1). Values for the power parameters are determined as δ̂ = argmax_δ MLC(δ).
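Under the beta-binomial setup above, this marginal likelihood is available in closed form through beta functions, so δ̂ can be found by a simple grid search. A sketch of this empirical Bayes selection (the grid resolution and helper names are our own choices, not from the paper):

```python
import math

def log_beta_fn(a, b):
    """log of the beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_marginal(x1, n1, stage2, deltas, a=1.0, b=1.0):
    """Log marginal likelihood of one treatment's stage 1 data under a
    normalized power prior; the stage 2 binomial coefficients raised to
    delta_s cancel between the prior and its normalizing constant."""
    a0 = a + sum(d * x for (x, n), d in zip(stage2, deltas))
    b0 = b + sum(d * (n - x) for (x, n), d in zip(stage2, deltas))
    return log_beta_fn(a0 + x1, b0 + n1 - x1) - log_beta_fn(a0, b0)

def mlc_deltas(data, grid=21):
    """Grid search for (delta_1, delta_2) maximizing the summed log
    marginal likelihood across treatments.
    data: list of (x1, n1, [(x, n) for subgroup 1, (x, n) for subgroup 2])."""
    cand = [i / (grid - 1) for i in range(grid)]
    return max(((d1, d2) for d1 in cand for d2 in cand),
               key=lambda ds: sum(log_marginal(x1, n1, s2, ds)
                                  for x1, n1, s2 in data))
```

When the stage 2 response rates conflict strongly with stage 1, the search lands at the boundary (0, 0); identical rates push the chosen weights toward 1, illustrating the all-or-nothing tendency of the MLC discussed in Section 4.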

3.2 Modified power prior model

The modified power prior (MPP) model proposed by duan2006evaluating treats δ_1 and δ_2 as random variables; banbeta2019modified applied the MPP to estimate treatment effects that incorporate control arms into a current trial. In our study, the MPP is given by

π(π, δ_1, δ_2 | D_21, D_22) ∝ [ ∏_{s=1,2} L(π | D_2s)^δ_s / C(δ_s) ] · π_0(π) · π_0(δ_1) · π_0(δ_2),   (6)

where

C(δ_s) = ∫ L(π | D_2s)^δ_s · π_0(π) dπ,   (7)

and π_0(δ_s) is an initial prior distribution of δ_s. The normalizing constant C(δ_s) is necessary in the formulation of the MPP when δ_s is random to enforce the likelihood principle (duan2006evaluating; banbeta2019modified).

We assume that x_j and x_js are distributed as Binomial(n_j, π_j) and Binomial(n_js, π_j), respectively. The initial prior distributions π_0(π_j) and π_0(δ_s) are Beta(a_j, b_j) and Beta(c_s, d_s), respectively. After plugging these distributions and likelihood functions into Equation (6), we can analytically derive the MPP as follows, which is a multi-parameter version of the formula derived in banbeta2019modified:

(8)

The choice of hyperparameters a_j, b_j, c_s, and d_s reflects our belief in the response rates of treatments and the compatibility of “current” and “historical” data in our snSMART. If we do not have previous knowledge about π_j and δ_s, their prior distributions can be set as Beta(1,1).

3.3 Power prior model with closeness measure

In addition to likelihood-based approaches, we can define a metric that describes the closeness of the posterior distributions of first stage and second stage response rates. A natural choice of such a metric is Bhattacharyya’s overlap measure (BOM) (bhattacharyya1946measure). If distributions from two populations are continuous with probability density functions f(x) and g(x), the BOM is defined as ∫ √(f(x)·g(x)) dx. The BOM is useful in our setting because it takes values in the interval [0, 1], in which 0 indicates that two distributions are fully separated, while 1 means that two distributions are identical. This agrees with the interpretation of power parameters in power prior models.

We define the posterior distributions of response rates of treatment j in stage 1 and stage 2 (within a specific subgroup s) as p(π_j | D_1) and p(π_j | D_2s), respectively, where j ∈ {A, B, C} and s = 1, 2. Because we assume that the prior distributions of π_j are Beta distributions, the posterior distributions p(π_j | D_1) and p(π_j | D_2s) will also follow Beta(α_j, β_j) and Beta(α_js, β_js), respectively. Thus, we have

BOM_js = ∫ √( p(π_j | D_1) · p(π_j | D_2s) ) dπ_j = B( (α_j + α_js)/2, (β_j + β_js)/2 ) / √( B(α_j, β_j) · B(α_js, β_js) ),   (9)

where α_j = a_j + x_j, β_j = b_j + n_j − x_j, α_js = a_j + x_js, and β_js = b_j + n_js − x_js. We then derive values for δ_1 and δ_2 as the average of the BOM over all three treatments, or δ_s = (1/3) Σ_j BOM_js.
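Because the overlap of two Beta densities reduces to a ratio of beta functions, the BOM needs no numerical integration. A sketch, assuming the Beta posterior parameters for each treatment have already been computed (function names are ours):

```python
import math

def log_beta_fn(a, b):
    """log of the beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def bom_beta(a1, b1, a2, b2):
    """Bhattacharyya overlap of Beta(a1, b1) and Beta(a2, b2):
    B((a1 + a2)/2, (b1 + b2)/2) / sqrt(B(a1, b1) * B(a2, b2))."""
    return math.exp(log_beta_fn((a1 + a2) / 2, (b1 + b2) / 2)
                    - 0.5 * (log_beta_fn(a1, b1) + log_beta_fn(a2, b2)))

def delta_from_bom(stage1_posts, stage2_posts):
    """delta_s as the average BOM across treatments; each argument is a
    list of (alpha, beta) Beta posterior parameters, one pair per treatment."""
    boms = [bom_beta(a1, b1, a2, b2)
            for (a1, b1), (a2, b2) in zip(stage1_posts, stage2_posts)]
    return sum(boms) / len(boms)
```

Identical posteriors give an overlap of exactly 1, and well-separated posteriors give values near 0, matching the interpretation of δ_s above.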

Alternatively, the two-sided p-value of a Fisher’s exact test (FET) comparing stage 1 data with stage 2 data from subgroup s can be used to quantify the closeness of treatment response rates in both stages. Specifically, for each treatment j we construct a 2 × 2 table where the rows contain the numbers of participants from stage 1 or stage 2 subgroup s and the columns contain the numbers of responders or non-responders. The two-sided p-value is computed using all the tables that are equally or more extreme than the observed table, where extremity is defined by a table’s hypergeometric probability. If the response rates change across the stages, the p-value from the FET should be small, suggesting that the data from stage 1 and stage 2 subgroup s are incompatible. On the contrary, if the response rates do not change across the stages, we can expect a p-value close to 1, indicating that a higher weight should be put on the “historical” data in subgroup s. Similar to the BOM, we can calculate δ_s = (1/3) Σ_j p_js, in which p_js is the p-value for subgroup s and treatment j.
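The two-sided p-value described above can be computed directly from the hypergeometric distribution of the 2 × 2 table. A self-contained sketch (in practice a library routine such as scipy.stats.fisher_exact would typically be used; the function below and its tie-breaking tolerance are ours):

```python
from math import comb

def fisher_two_sided(x1, n1, x2, n2):
    """Two-sided Fisher's exact p-value for the 2x2 table comparing
    responders/non-responders in stage 1 (x1 of n1) with those in a
    stage 2 subgroup (x2 of n2): sum the probabilities of all tables no
    more likely than the observed one, holding the margins fixed."""
    m, total = x1 + x2, n1 + n2            # margins: responders, participants
    denom = comb(total, n1)
    def prob(k):                           # hypergeometric P(k stage 1 responders)
        return comb(m, k) * comb(total - m, n1 - k) / denom
    p_obs = prob(x1)
    lo, hi = max(0, m - n2), min(m, n1)    # feasible range for the top-left cell
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-12))

def delta_from_fet(tables):
    """delta_s as the mean p-value across treatments;
    tables: list of (x1, n1, x2, n2), one per treatment."""
    ps = [fisher_two_sided(*t) for t in tables]
    return sum(ps) / len(ps)
```

Matching response counts across stages give a p-value of 1 (full borrowing), while strongly discrepant counts drive the p-value, and hence the contribution to δ_s, toward 0.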

4 Simulation studies

4.1 Data generation

We conducted Monte Carlo simulations to compare the performance of the power prior models described in Section 3. The seven scenarios that we examined are listed in Table 6.1. In all scenarios in stage 1, exactly 1/3 of participants are assigned to each of the three possible treatments. Their stage 1 responses are generated by a Bernoulli distribution with the response rates corresponding to the assigned treatments, shown in Table 6.1(a). Their stage 2 responses are also generated by a Bernoulli distribution with the response rates corresponding to the assigned stage 1 and 2 treatments, shown in Table 6.1(b). In scenarios 1-5, the first stage response rates of the treatments differ from each other, whereas these response rates are identical in scenarios 6 and 7. The last two scenarios can be used to examine the performance of estimation under the “null” cases.

The rationale of designing the scenarios is as follows:

Scenario 1

The response rates remain unchanged in stage 2; there is full compatibility between stage 1 and 2 data.

Scenario 2

The stage 2 response rates double if participants respond in stage 1; there is full compatibility between stage 1 data and stage 2 data only for stage 1 non-responders.

Scenario 3

The stage 2 response rates are halved for participants who do not respond in stage 1; there is full compatibility between stage 1 data and stage 2 data only for stage 1 responders.

Scenario 4

The stage 2 response rates increase, but the scale of increase differs between stage 1 responders and non-responders; there is not full compatibility between stage 1 and stage 2 data.

Scenario 5

Stage 2 response rates change with respect to both first and second stage treatments, which violates a main assumption of the BJSM; there is not full compatibility between stage 1 and stage 2 data.

Scenario 6

All stage 1 and stage 2 response rates are equal; there is full compatibility between stage 1 and 2 data.

Scenario 7

Response rates are the same in stage 1 but not stage 2, and these depend on both first and second stage treatment (this violates a main assumption of the BJSM); there is not full compatibility between stage 1 and stage 2 data.

In Section 4.2, we use scenarios 1-4 to investigate the impact on δ_1 and δ_2 when part of or the whole stage 2 data are not compatible with the stage 1 data. We expect that: (1) both δ_1 and δ_2 are close to 1 in scenario 1; (2) δ_1 should move closer to 0 in scenario 2; (3) δ_2 should move closer to 0 in scenario 3; and (4) both δ_1 and δ_2 should move closer to 0 in scenario 4. In Section 4.3, we evaluate the estimation of π using scenarios 4-7, with which we compare the performance either within different power prior models or between power prior models and the BJSM. We also examine whether partial borrowing of information from second stage data (0 < δ_s < 1) can outperform complete borrowing (δ_s = 1) or no borrowing (δ_s = 0).

The prior distribution of π_j was Beta(1,1) for all methods, and the prior distribution of δ_s was Beta(1,1) in the MPP. To maximize the flexibility of the BJSM, we set the prior distributions of both linkage parameters to gamma distributions with support (0, ∞) and prior mean 1. All simulation studies were performed with 10,000 runs, and the total sample size for each run was either 90 or 300.

4.2 Estimation of δ_1 and δ_2

In Table 6.2, we present the mean estimated δ_1 and δ_2 and their Monte Carlo standard errors obtained from five different power prior models in scenarios 1-4. Presently, we restrict our focus to scenarios 1-4 because these scenarios are designed to examine how δ_1 and δ_2 change when data from the two stages become incompatible.

When the total sample size is 90, we first observe the differences in the mean estimated δ_s when comparing scenarios 2-4 to scenario 1, in which the data from stages 1 and 2 are fully compatible. In MLC, the mean estimated δ_1 is 0.65 in scenario 1 compared to 0.32 in scenario 2 and 0.08 in scenario 4, where the stage 2 data from stage 1 responders are not compatible with the stage 1 data. The mean estimated δ_2 is 0.75 in scenario 1 compared to 0.40 in scenario 3 and 0.45 in scenario 4, where the stage 2 data from stage 1 non-responders are not compatible with the stage 1 data.

Similarly, we can see the same pattern in MPP, PLC, BOM and FET, but the scale of difference varies. The differences in mean estimated δ_s when comparing scenario 1 to scenarios 2-4 are about 0.2 to 0.4 in FET and BOM, 0.1 to 0.2 in MPP, and less than 0.1 in PLC. The differences become larger for all methods when the total sample size is 300. However, there is a trade-off between the difference in δ_s across various scenarios and the standard errors of the estimated δ_s. The estimated δ_s from MLC have much larger standard errors than those of all other methods. In contrast, the estimates from PLC change only slightly across different scenarios, resulting in relatively small standard errors of the estimates.

In addition, we also investigated the ranges of the mean estimated δ_s from different methods. When the total sample size is 90, the values of δ_s from the BOM are close to 0.5 even when the data from the two stages are incompatible, which indicates that the BOM tends to put higher weights on “historical” data, regardless of the compatibility of first and second stage data. In contrast, the values of δ_s from PLC are between 0.2 and 0.35 in all scenarios, which agrees with the finding in ibrahim2003optimality that the estimated δ_s from this method is relatively small in general. For MPP, MLC and FET, the values of δ_s are greater than 0.5 when data from the two stages are compatible, whereas the values of δ_s are smaller than 0.5 if the data are incompatible.

We note that data compatibility is not the only driving force of the value of δ_s for the MPP. The prior distribution of δ_s also plays an important role in the range of the mean estimated δ_s. In Table 6.3, we let the prior distributions of δ_s be Beta(0.4,1.6), Beta(1,1), and Beta(1.6,0.4), which correspond to prior means of 0.2, 0.5 and 0.8, respectively. We can see that the range of the mean estimated δ_s is centered at the prior mean of δ_s, especially when the total sample size is 90. When the total sample size is 300, the data have more capacity to adjust the estimated δ_s in addition to the influence from the prior distributions. Thus, we conclude, similar to neuenschwander2009note, that the specification of the prior distribution of δ_s can greatly impact the results from the MPP method. The mean estimated δ_s under scenarios 5-7 for all methods can be found in the supplementary material Table LABEL:suptab:bayes_delta_567.

We further examine the distributions of the estimated δ_s from different methods under scenarios 1-4 in Figure 6.2 when the total sample size is 90. The histograms from the PLC under the four scenarios do not differ much, indicating that the power parameters obtained from PLC do not vary with the changing scenarios. For MLC, the chance of choosing 0 or 1 for the power parameters is extremely high, which is not a desirable property because second stage data are likely to be completely ignored even when the data across stages are fully compatible. This result suggests that the estimated δ_s from the power prior model with MLC is highly sensitive to slight changes in the number of responders. In particular, when the expected number of responders to a treatment in a subgroup in stage 2 is smaller than 10, which may be common in an snSMART, a change in the observed number of responders by 1 or 2 can result in a sharp decrease of the estimated δ_s from 1 to 0 or vice versa. The histograms from the MPP, BOM and FET are more appealing. In scenario 1, the distributions of δ_1 and δ_2 largely overlap, while in the other scenarios, we can easily see the movement of either one or both distributions when part of or all the second stage data are not compatible with the first stage data.

When the total sample size is 300, the distributions of δ_1 and δ_2 are shown in supplementary Figure LABEL:fig:hist300. For FET, BOM and MPP, due to the increased sample size, the distributions move more when the data are incompatible, compared to the histograms in Figure 6.2. For MLC, the chance of assigning the wrong power parameters seems lower than with a total sample size of 90, but completely ignoring the second stage data is still undesirable even when the data across stages are not compatible: borrowing some information from incompatible second stage data may still increase efficiency, even though the bias may increase as well, which we discuss in Section 4.3. The distributions of δ_1 and δ_2 under scenarios 5-7 can be found in the supplementary material Figures LABEL:supfig:hist90_567 and LABEL:supfig:hist300_567.

4.3 Estimation of π

In Figure 6.3, each bar is the simulation-wide average absolute value of bias or root mean squared error (rMSE) of the three treatment response rate estimates from each of the methods. We include results for power prior models with δ_s fixed at 0 or 1 for reference, as these two approaches only perform well in either fully compatible or highly incompatible scenarios, and are not preferred in most realistic settings.

In scenario 4, we first note that the BJSM has the smallest bias and rMSE among all methods because the assumption on the linkage parameters is met in this scenario. Among all the power prior methods, we expect some bias because stage 2 data are highly incompatible with stage 1 data. Although the estimation from MLC is least biased, because the estimated δ_s are close to 0 in a large portion of simulated runs, the rMSE of MLC is close to that from MPP, PLC and FET due to the high Monte Carlo variability of the MLC estimates. In scenario 5, the power prior models are better able to appropriately weight the second stage data, leading to lower rMSE than the BJSM, whose assumptions are violated in this scenario.

In scenario 6, the data from the two stages are compatible, and although the bias for all methods is small, we see that the rMSEs of BOM are smaller than those of the other methods. This is because the distributions of δ_1 and δ_2 for BOM in supplementary Figure LABEL:supfig:hist90_567 are clustered in the right half of the distribution, indicating power parameters closer to 1 compared to the other histograms.

Scenario 7 is similar to scenario 5 in terms of data incompatibility and violation of an assumption of the BJSM, but the level of data incompatibility is less strong according to Figure LABEL:supfig:hist90_567. Thus, we see that the rMSE of the power prior models is lower than the rMSE from the BJSM. The details of the bias and rMSE for all methods under scenarios 1-7 can be found in Tables LABEL:suptab:bias_90 and LABEL:suptab:rMSE_90 of the supplementary material. We also examined the patterns of bias and rMSE at smaller total sample sizes, and the patterns are similar to those with a total sample size of 90 (results not shown). Thus, the power prior models can still be applied to snSMARTs with even smaller sample sizes.

5 Discussion

Overall, we do not recommend use of the power prior models with the MPP, PLC or MLC. For the MPP, the choice of δ_s highly depends on its prior distribution, especially in an snSMART where the sample size is small. For the PLC, the estimated δ_s stay relatively constant across different scenarios in snSMARTs, regardless of whether the data from the two stages are compatible. For the MLC, the mean estimated δ_s can change along with the compatibility across the two stages, but the value of δ_s is highly sensitive to small changes in the number of responders in an snSMART. This sensitivity of the MLC leads to a high chance of choosing δ_s of exactly 0 or 1.

Therefore, we feel that the PLC and MLC should not be used to estimate response rates in an snSMART because it is undesirable to choose a fixed value, or the extreme values of 0 or 1, with high probability. The MPP is likewise not preferred because the weights depend strongly on their prior distributions.

Hence, the suggested candidate models for treatment effect estimation in an snSMART are the BJSM and the power prior models with BOM or FET, when considering both the performance of treatment effect estimation and reasonable values of δ_s.

For the FET, we acknowledge that the stage 2 outcomes from stage 1 responders and their stage 1 outcomes may not be independent, which violates an assumption of Fisher’s exact test. However, we believe the resulting smaller p-values are reasonable, because less weight should be put on second stage data when within-individual correlation of first and second stage responses affects the apparent dependence between stage and response. Moreover, the number of correlated observations is likely to be small, especially in rare disease trials. For the BOM, we can see that the weights assigned to the “historical” data tend to be larger than the weights from the other methods. For the BJSM, we need to assume that the relationship between first and second stage response rates can be described proportionally through the linkage parameters. This assumption may be difficult to justify.

When selecting a primary method of analysis, some background information about the treatments of interest in an snSMART may influence model choice in the estimation of treatment effects. If investigators believe that the second stage response rates are proportional to the first stage response rates and the proportionality (linkage) parameters do not depend on first and second stage treatments, then the BJSM may be preferred since it is most efficient when its assumptions are met. For example, the BJSM can be used if we believe that the response rates of all treatments will double in the second stage for all first stage responders. However, if this assumption is violated, which may be very likely, then power prior models with BOM or FET may be considered. The BOM is preferred if the data from the two stages are more compatible, while the FET is preferred in cases of less compatibility between data from the two stages. If prior information about possible first and second stage response rates of all treatments exists, simulation studies can be conducted to help decide the prior distributions of π_j and δ_s.

An extension of the SMART is the proposal by liu2017sequential that the design be enriched at later stages of the trial by the inclusion of subjects who received previous stage treatments outside of the trial. They used the term SMARTER for a SMART with enrichment. While this design assumed larger sample sizes, the same idea can apply to an snSMART. In an snSMART, it is not clear how a subject’s information from outside the trial should be incorporated by the BJSM. However, this enrichment is not a problem for the power prior methods, since these methods do not link an individual subject’s responses between stages. Thus, our power prior models may be more appropriate for SMARTER designs.

Moreover, a different number of subgroups in stage 2 of an snSMART can be pre-specified instead of the S = 2 in our study. In simulations, we tried a larger number of subgroups, where the δ_s can differ depending on individuals’ first stage responses and their stage 1 treatment assignments. However, due to the resulting small sample sizes in each subgroup, the extra power parameters did not improve the bias and efficiency of the estimation (results not shown). The application of Bhattacharyya’s overlap measure or Fisher’s exact test in power prior models is not limited to our snSMART settings; it can also be used in more general cases where data from historical trials are used to facilitate the data analysis of a current clinical trial. In this setting, the potential issue of dependence between samples no longer exists because patients from different trials should be uncorrelated.

6 Acknowledgment

This work was supported through a Patient-Centered Outcomes Research Institute (PCORI) Award (ME-1507-31108).

References