1 Introduction
Randomized controlled trials (RCTs) are the gold standard for estimating the causal effect of a treatment. An RCT may give an unbiased estimate of the Sample Average Treatment Effect (SATE), but external validity is an issue when the individuals in the RCT are unrepresentative of the actual population of interest. For example, the participants in an RCT in which individuals volunteer to sign up for health insurance may be in poorer health at baseline than the overall population. External validity is particularly relevant to policymakers who want to know how the treatment effect would generalize to the broader population.
This paper improves on the transportability of clinical trial results to a population by extending a method of estimating population average treatment effects to settings with noncompliance. Previous approaches to the problem of extrapolating RCT results to a population (Imai et al., 2008; Stuart et al., 2011; Hartman et al., 2015) are designed for settings where there is full compliance with treatment. This paper contributes to the literature by defining the assumptions required to identify complier–average causal effects for the target population and proposing an estimation procedure to recover this estimand.
Hartman et al. (2015) propose a method of reweighting the responses of individuals in an RCT according to the distribution of covariates in the target population in order to estimate the population average treatment effect on the treated (PATT). We extend the method to estimate the complier–average causal effects for the target population from RCT data with noncompliance, and refer to this estimator as PATTC. Noncompliance occurs when individuals who are assigned to the treatment group do not comply with the treatment; for individuals assigned to control, we are unable to observe who would have complied had they been assigned treatment. Noncompliance in treatment assignment is a prevalent issue in RCTs and biases the intention–to–treat (ITT) estimate towards zero.
PATTC involves the expectation of the response of RCT compliers, conditional on their covariates, where the expectation is taken over the distribution of covariates for population members receiving treatment. Note that our estimation strategy differs from reweighting methods that use propensity scores to adjust the RCT data (Stuart et al., 2011). In this context, the propensity score model predicts participation in the RCT, given pretreatment covariates common to both the RCT and population data. Individuals in the RCT and population are then weighted according to the inverse of the estimated propensity score. We propose an alternative approach of predicting the response surface for RCT compliers, and then use the predicted values from the response surface model to estimate the potential outcomes of population members who received treatment, given their covariates.
When estimating the average causal effect from an RCT, researchers typically scale the ITT estimate by the compliance rate under the identifying assumptions outlined in Angrist et al. (1996). When extrapolating RCT results to a population, one might simply weight the PATT estimate by the population compliance rate in order to yield a population average effect of treatment on treated compliers. (A similar approach is used by Imai et al. (2013) for estimating average complier indirect effects.) However, the compliance rate is likely to differ between the sample and population, as well as across subgroups based on pretreatment covariates. We propose an alternative approach of directly identifying the likely compliers in the control group. By explicitly modeling compliance, this approach allows researchers to decompose population estimates by covariate group and to predict which population members are likely to comply with treatment. Both of these features are useful for policymakers in evaluating the efficacy of policy interventions for subgroups of interest in a population.
We apply the proposed estimator to measure the effect of Medicaid coverage on health care use for a target population of adults who may benefit from government-backed expansions to the Medicaid program. We draw RCT data from a large-scale health insurance experiment, in which only about 30% of those randomly selected to receive Medicaid benefits actually enrolled. We find substantial differences between sample and population estimates in terms of race, education, and health status subgroups.
The paper proceeds as follows: Section 2 presents the proposed estimator and the necessary assumptions for its identifiability; Section 3 describes the estimation procedure; Section 4 reports the estimator's performance in simulations; Section 5 uses the estimator to identify the effect of extending Medicaid coverage to the low-income adult population in the U.S.; Section 6 discusses the results and offers directions for future research.
2 Estimator
We are interested in using the outcomes from an RCT to estimate the average treatment effect on the treated for a target population. Treatment in the population is not assigned at random, but rather may depend on unobserved variables, confounding the effect of treatment on the outcome of interest. RCTs are needed to isolate the effect of treatment. However, strict exclusion criteria for RCTs often result in a sample of individuals whose distribution of covariates differs substantially from the target population.
Ideally, we would take the results of an RCT and reweight the sample such that the reweighted covariates match those in the population. In practice, one rarely knows the true covariate distribution in the target population. Instead, we consider data from a nonrandomized, observational study in which participants are representative of the target population. The proposed estimator combines RCT and observational data to overcome these issues.
2.1 Assumptions
Let $Y_{ist}$ be the potential outcome for individual $i$ in group $s$ and treatment receipt $t$. Let $S_i$ denote the sample assignment, where $S_i = 0$ is the population and $S_i = 1$ is the RCT. $W_i$ indicates treatment assignment and $T_i$ indicates whether treatment was actually received. Treatment is assigned at random in the RCT, so we observe both $W_i$ and $T_i$ when $S_i = 1$. For compliers in the RCT, $T_i = W_i$.
Let $X_i$ be individual $i$'s observable pretreatment covariates that are related to the sample selection mechanism for membership in the RCT, treatment assignment in the population, and complier status. Let $C_i$ be an indicator for individual $i$'s compliance with treatment, which is observable only for individuals in the RCT treatment group.
In the population, we suppose that treatment is made available to individuals based on their covariates $X_i$. Individuals with $W_i = 0$ do not receive treatment, while those with $W_i = 1$ may decide whether or not to accept treatment. For individuals in the population, we only observe $T_i$, not $W_i$. (We frame Assumptions 3 and 4 in terms of $W_i$ and $C_i$ in order to distinguish the population controls who should have received treatment, i.e., individuals with $W_i = 0$ and $C_i = 1$, from noncompliers assigned to control, i.e., individuals with $W_i = 0$ and $C_i = 0$.)
Assumption 1.
Consistency under parallel studies: $Y_{i0t} = Y_{i1t}$ for all individuals $i$ and treatments $t \in \{0, 1\}$.
Assumption 1 requires that each individual has the same response to treatment, whether or not the individual is in the RCT. Compliance status is not a factor in this assumption because we assume that compliance is conditionally independent of sample and treatment assignment for all individuals with covariates $X_i$.
Assumption 2.
Conditional independence of compliance and assignment: $C_i \perp (S_i, W_i) \mid X_i$.
Assumption 2 implies that $\Pr(C_i = 1 \mid X_i, S_i, W_i) = \Pr(C_i = 1 \mid X_i)$, which is useful when predicting the probability of compliance as a function of covariates $X_i$ in the first step of the estimation procedure. Together, Assumptions 1 and 2 ensure that potential outcomes do not differ based on sample assignment or receipt of treatment.

Assumption 3.
Strong ignorability of sample assignment for treated: $(Y_{i01}, Y_{i11}) \perp S_i \mid (X_i, W_i = 1, C_i = 1)$ and $0 < \Pr(S_i = 1 \mid X_i, W_i = 1, C_i = 1) < 1$.
Assumption 3 ensures the potential outcomes under treatment are independent of sample assignment for individuals with the same covariates and assignment to treatment. (Throughout, we assume individuals are sampled randomly from an infinite population.) We make a similar assumption for the potential outcomes under control:
Assumption 4.
Strong ignorability of sample assignment for controls: $(Y_{i00}, Y_{i10}) \perp S_i \mid (X_i, W_i = 0, C_i = 1)$ and $0 < \Pr(S_i = 1 \mid X_i, W_i = 0, C_i = 1) < 1$.
RCT study designs that apply restrictive exclusion criteria may increase the likelihood of unobserved differences between the RCT and target population, which would violate the strong ignorability assumptions. (Assumptions 3 and 4 also imply strong ignorability of sample assignment for treated and control noncompliers, since compliance is independent of sample and treatment assignment conditional on $X_i$ by Assumption 2. However, we are interested only in modeling the response surfaces for compliers.)
Figure 1 shows Assumptions 2, 3, and 4 in a directed acyclic graph. Treatment assignment may depend on sample assignment only through $X_i$, and the potential outcomes may depend on sample assignment only through $X_i$. From the internal validity standpoint, the role of $X_i$ is critical: if any relevant observed covariates are not controlled for, then there is a backdoor path from treatment back through the omitted covariates and into the response. (We use the same $X_i$ across all identifying assumptions, which implicitly assumes that the observable covariates that determine sample selection also determine population treatment assignment and complier status. This choice reflects a modeling assumption of the estimation procedure described in Section 3.)
Interference undermines the framework because it creates more than two potential outcomes per participant, depending on the treatment receipt of other participants (Rubin, 1990). We therefore assume no interference between units:
Assumption 5.
No interference: the potential outcomes for individual $i$ do not depend on the treatment received by any other individual $j \neq i$.
We additionally include Assumptions 6 and 7, which are made by Angrist et al. (1996) to ensure identifiability. The former assumption ensures that crossover is only possible from treatment to control:
Assumption 6.
No defiers: $T_i(W_i = 1) \geq T_i(W_i = 0)$ for all $i$.
Assumption 7 ensures treatment assignment affects the response only through the treatment received. In particular, the treatment effect may be nonzero only for compliers.
Assumption 7.
Exclusion restriction: for noncompliers ($C_i = 0$), the response is unaffected by treatment assignment, $Y_i(W_i = 1) = Y_i(W_i = 0)$.
2.2 PATTC
The estimand of interest is the Population Average Treatment Effect on Treated Compliers (PATTC):

$$\tau_{\mathrm{PATTC}} = E\left[Y_{i01} - Y_{i00} \mid S_i = 0,\, T_i = 1\right]. \qquad (1)$$
PATTC is interpreted as the average treatment effect on those in the population who receive treatment. It includes individuals who actually receive the treatment, but does not include those who are eligible for treatment and do not accept it (i.e., noncompliers). The following theorem relates the treatment effect in the RCT to the treatment effect in the population.
Theorem 1.
$$\tau_{\mathrm{PATTC}} = E_X\!\left[E\!\left(Y_{i11} \mid X_i, S_i = 1, W_i = 1, C_i = 1\right) - E\!\left(Y_{i10} \mid X_i, S_i = 1, W_i = 0, C_i = 1\right)\right], \qquad (2)$$

where $E_X$ denotes the expectation with respect to the distribution of $X_i$ for population members who receive treatment ($S_i = 0$, $T_i = 1$).
Proof.
We separate the expectation linearly into two terms and consider each individually.
Intuitively, conditioning on $X_i$ makes sample selection ignorable under Assumption 3; this is the critical step in the derivation of the first term, and Assumption 4 plays the same role for the second term.
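The two-term argument can be written out explicitly. The following is a reconstruction of the derivation, writing $Y_{ist}$ for individual $i$'s potential outcome in sample $s$ under treatment receipt $t$, $S_i$ for sample assignment ($S_i = 1$ is the RCT), $W_i$ for treatment assignment, $T_i$ for treatment received, $C_i$ for complier status, and $X_i$ for covariates:

```latex
\begin{align*}
\tau_{\mathrm{PATTC}}
 &= E\big[Y_{i01} - Y_{i00} \mid S_i = 0,\, T_i = 1\big] \\
 &= E\big[Y_{i01} \mid S_i = 0, T_i = 1\big]
  - E\big[Y_{i00} \mid S_i = 0, T_i = 1\big] \\
 &= E_X\Big[E\big(Y_{i01} \mid X_i, S_i = 0, W_i = 1, C_i = 1\big)\Big]
  - E_X\Big[E\big(Y_{i00} \mid X_i, S_i = 0, W_i = 1, C_i = 1\big)\Big] \\
 &= E_X\Big[E\big(Y_{i11} \mid X_i, S_i = 1, W_i = 1, C_i = 1\big)\Big]
  - E_X\Big[E\big(Y_{i10} \mid X_i, S_i = 1, W_i = 0, C_i = 1\big)\Big],
\end{align*}
```

where $E_X$ is taken over the distribution of $X_i$ given $S_i = 0$ and $T_i = 1$. The first step is linearity of expectation; the second iterates expectations over $X_i$ and rewrites the event $T_i = 1$ as $W_i = 1, C_i = 1$; the last replaces population quantities with RCT quantities using consistency and strong ignorability of sample assignment conditional on $X_i$.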
3 Estimation procedure
There are two challenges in turning Theorem 1 into an estimator in practice. First, we must estimate the inner expectations over potential outcomes of compliers in the RCT. In the empirical example, we use an ensemble of algorithms (van der Laan et al., 2007) to estimate the response surface for compliers in the RCT, given their covariates. Thus, the first term in Eq. (2) is estimated by the weighted average of points on the response surface, evaluated at each treated population member's covariates under treatment. The second term is estimated by the weighted average of points on the response surface, evaluated at each treated population member's covariates under control.
The second challenge is that we cannot observe which individuals enter the estimation of the second term. In the RCT control group, $C_i$ is unobservable, since these individuals never receive treatment ($T_i = 0$). We must estimate the second term of Eq. (2) by predicting who in the control group would be a complier had they been assigned to treatment. An alternative approach is to simply weight the PATT estimate by the population compliance rate in order to yield a population average effect of treatment on treated compliers. However, the compliance rate is likely to differ between the sample and population, as well as across subgroups. Explicitly modeling compliance allows us to decompose PATTC estimates by subgroup according to covariates common to both the RCT and observational datasets.
The procedure for estimating $\tau_{\mathrm{PATTC}}$ using Theorem 1 is as follows:

Step 1. Using the group assigned to treatment in the RCT ($S_i = 1$, $W_i = 1$), train a model (or an ensemble of models) to predict the probability of compliance as a function of covariates $X_i$.

Step 2. Using the model from Step 1, predict who in the RCT control group would have complied with treatment had they been assigned to the treatment group. (We use a standard prediction threshold of 50% in order to classify compliers. Adjusting the prediction threshold upward would result in more accurate classifications, although we do not explore this approach.)

Step 3. For the observed compliers assigned to treatment and the predicted compliers assigned to control, train a model to predict the response $Y_i$ using $X_i$ and $T_i$, which gives $\hat{Y}_{i1t}$ for $t \in \{0, 1\}$.

Step 4. For all individuals who received treatment in the population ($S_i = 0$, $T_i = 1$), estimate their potential outcomes using the model from Step 3, which gives $\hat{Y}_{i01}$ and $\hat{Y}_{i00}$. The mean counterfactual under treatment minus the mean counterfactual under control is the estimate of $\tau_{\mathrm{PATTC}}$.
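To make the four steps concrete, the sketch below runs the procedure on synthetic data. The logistic and least-squares models are illustrative stand-ins for the ensemble learners used in the paper, and all data and coefficient values are hypothetical.

```python
# Sketch of the four-step PATT-C procedure on synthetic data.
# Logistic regression (Step 1) and least squares (Step 3) are toy
# stand-ins for the ensemble models used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def fit_logit(X, y, iters=2000, lr=0.5):
    """Plain-numpy logistic regression via gradient ascent."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict_proba(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Synthetic RCT: covariate x, random assignment W, one-sided noncompliance.
n = 4000
x = rng.normal(size=n)
W = rng.integers(0, 2, size=n)
C = (x + rng.normal(size=n) > 0).astype(float)  # latent complier status
T = W * C                                       # received = assigned * complier
y = 2.0 * T + x + rng.normal(size=n)            # true complier effect = 2

# Step 1: model compliance among those assigned to treatment.
w_comp = fit_logit(x[W == 1].reshape(-1, 1), C[W == 1])

# Step 2: predict compliers among controls (50% threshold).
pred_C = predict_proba(w_comp, x.reshape(-1, 1)) > 0.5
complier = np.where(W == 1, C == 1, pred_C)

# Step 3: response model on observed + predicted compliers, features (x, T).
A = np.column_stack([np.ones(complier.sum()), x[complier], T[complier]])
beta, *_ = np.linalg.lstsq(A, y[complier], rcond=None)

# Step 4: evaluate both counterfactuals for treated population members.
n_pop = 2000
x_pop = rng.normal(loc=0.5, size=n_pop)  # covariate-shifted target population
y1 = np.column_stack([np.ones(n_pop), x_pop, np.ones(n_pop)]) @ beta
y0 = np.column_stack([np.ones(n_pop), x_pop, np.zeros(n_pop)]) @ beta
patt_c = y1.mean() - y0.mean()
print(round(patt_c, 2))
```

Note that the predicted control compliers enter the response model with $T_i = 0$: they pin down the control arm of the complier response surface, which is the quantity Step 2 exists to recover.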
Assumptions 3 and 4 are particularly important for estimating $\tau_{\mathrm{PATTC}}$: the success of the proposed estimator hinges on the assumption that the response surface is the same for compliers in the RCT and target population. If this does not hold, then the potential outcomes $Y_{i01}$ and $Y_{i00}$ for target population individuals cannot be estimated using the model from Step 3. (Section 5.3 discusses whether the strong ignorability assumptions are plausible in the empirical application.)
3.1 Modeling assumptions
In addition to the identification assumptions, the estimation procedure requires additional modeling assumptions. First, we assume that the covariates $X_i$ that determine sample selection also determine population treatment assignment and complier status. As pointed out in Section 2.1, we also require that $X_i$ is complete: if any relevant elements of $X_i$ are not controlled for, then there is a backdoor path from treatment back through the omitted covariates and into the response. Lastly, we assume that the compliance model is accurate in predicting compliance in the training sample of RCT participants assigned to treatment and also generalizes to RCT participants assigned to control (Steps 1 and 2). Section 3.2 below describes the method of evaluating the generalizability of the compliance model.
3.2 Ensemble method
In the empirical application, we use the weighted ensemble method described in van der Laan et al. (2007) for Steps 1 and 3 of the estimation procedure. This ensemble method combines algorithms with a convex combination of weights chosen to minimize cross-validated error. It has been shown to control overfitting and to outperform single algorithms selected by cross-validation (Polley and Van Der Laan, 2010).
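The convex-weighting idea can be sketched in a few lines. Here, two toy candidate learners (a grand-mean predictor and ordinary least squares) stand in for the richer library used in the paper, and a grid search over weights replaces the quadratic-programming step of the published method; data are synthetic.

```python
# Minimal sketch of a super learner-style convex ensemble: candidate
# learners are combined with nonnegative weights summing to one, chosen
# to minimize cross-validated squared error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=500)

def mean_pred(Xtr, ytr, Xte):
    """Candidate 1: predict the training mean everywhere."""
    return np.full(len(Xte), ytr.mean())

def ols_pred(Xtr, ytr, Xte):
    """Candidate 2: ordinary least squares with intercept."""
    A = np.column_stack([np.ones(len(Xtr)), Xtr])
    b, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.column_stack([np.ones(len(Xte)), Xte]) @ b

learners = [mean_pred, ols_pred]

# Cross-validated ("level-one") predictions for each candidate learner.
folds = np.array_split(rng.permutation(len(y)), 5)
Z = np.zeros((len(y), len(learners)))
for test_idx in folds:
    train = np.setdiff1d(np.arange(len(y)), test_idx)
    for j, learner in enumerate(learners):
        Z[test_idx, j] = learner(X[train], y[train], X[test_idx])

# Grid search over convex weights (w, 1 - w) minimizing CV MSE.
grid = np.linspace(0, 1, 101)
mse = [np.mean((w * Z[:, 0] + (1 - w) * Z[:, 1] - y) ** 2) for w in grid]
w_best = grid[int(np.argmin(mse))]
print("weight on mean predictor:", w_best)
```

Because the synthetic outcome is linear in the covariates, nearly all of the weight lands on the least-squares candidate; with a heterogeneous library, the weights typically spread across several learners.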
We choose a variety of candidate algorithms to construct the ensemble, with a preference for algorithms that tend to perform well in supervised classification tasks. We also prefer algorithms with a built-in variable selection property: we input the same $X_i$ to each candidate algorithm, and each selects the covariates most important for predicting compliance status or potential outcomes. (A potential concern when predicting potential outcomes is that an algorithm might shrink the treatment received predictor to zero, which would result in no difference between counterfactual potential outcomes.) We select three types of candidate algorithms: nonparametric additive regression models (Buja et al., 1989); L1- or L2-regularized linear models, i.e., Lasso or ridge regression, respectively (Tibshirani et al., 2012); and ensembles of decision trees, i.e., random forests (Breiman, 2001). L1-regularized linear models are important for the application due to their variable selection properties: Lasso is particularly attractive because it tends to shrink all but one of the coefficients of correlated covariates to zero.

4 Simulations
We conduct a simulation study to compare the performance of the PATT and PATTC estimators. As a benchmark, we also compare the population estimates to the SATE, i.e., the ITT effect estimated from the RCT sample adjusted by the sample compliance rate.
The simulation is designed so that the effect of treatment is heterogeneous and depends on covariates which are different in the RCT and target population. The design satisfies the conditional independence assumptions in Figure 1.
4.1 Simulation design
In the simulation, RCT eligibility, complier status, and treatment assignment in the population depend on multivariate normal covariates $X_i = (X_{i1}, X_{i2}, X_{i3}, X_{i4})$ with mean vector $\mu$ and covariance matrix $\Sigma$. The first three covariates are observed by the researcher and $X_{i4}$ is unobserved.
The equation for selection into the RCT is

$$S_i = \mathbb{1}\{\alpha_S + \beta_{S1} X_{i1} + \beta_{S2} X_{i2} + \beta_{S3} X_{i3} + \gamma_S X_{i4} + \varepsilon_{iS} > 0\},$$

where $\varepsilon_{iS}$ is standard normal. The parameter $\alpha_S$ varies the fraction of the population eligible for the RCT and $\gamma_S$ varies the degree of confounding with sample selection. The remaining constants are held fixed across simulations.
Complier status is determined by

$$C_i = \mathbb{1}\{\alpha_C + \beta_{C1} X_{i1} + \beta_{C2} X_{i2} + \beta_{C3} X_{i3} + \gamma_C X_{i4} + \varepsilon_{iC} > 0\},$$

where $\varepsilon_{iC}$ is standard normal, $\alpha_C$ varies the fraction of compliers in the population, and $\gamma_C$ varies the degree of confounding with treatment assignment. The remaining constants are held fixed across simulations.
For individuals in the population ($S_i = 0$), treatment is assigned by

$$W_i = \mathbb{1}\{\alpha_W + \beta_{W1} X_{i1} + \beta_{W2} X_{i2} + \beta_{W3} X_{i3} + \gamma_W X_{i4} + \varepsilon_{iW} > 0\},$$

where $\varepsilon_{iW}$ is standard normal. Varying $\alpha_W$ changes the fraction eligible for treatment in the population and $\gamma_W$ varies the degree of confounding with sample selection. The remaining constants are held fixed across simulations. For individuals in the RCT ($S_i = 1$), treatment assignment is a sample from a Bernoulli distribution with fixed probability. We set treatment received according to $W_i$ and $C_i$: $T_i = W_i$ if $C_i = 1$ and $T_i = 0$ if $C_i = 0$.

Finally, the response is determined by

$$Y_i = \tau_i T_i + \beta_{Y1} X_{i1} + \beta_{Y2} X_{i2} + \beta_{Y3} X_{i3} + \varepsilon_{iY}.$$

We assume that the treatment effect is heterogeneous depending on $X_{i1}$: $\tau_i = \tau_1$ if $X_{i1} > 0$ and $\tau_i = \tau_0$ otherwise, with the constants held fixed across simulations. $\varepsilon_{iY}$ is standard normal, and the error terms $\varepsilon_{iS}$, $\varepsilon_{iC}$, $\varepsilon_{iW}$, and $\varepsilon_{iY}$ are mutually independent.
We generate a population of 30,000 individuals and randomly sample 5,000. Those among the 5,000 who are eligible for the RCT ($S_i = 1$) are selected. Similarly, we sample 5,000 individuals from the population and select those who are not eligible for the RCT ($S_i = 0$): these are the observational study participants. (This setup mimics the reality that a population census is usually impossible.)
We set each individual's treatment received according to their treatment assignment and complier status and observe their responses $Y_i$. In this design, the manner in which $S_i$, $C_i$, $W_i$, $T_i$, and $Y_i$ are simulated ensures that Assumptions 1–7 hold.
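A data-generating process of this form can be sketched as follows. The coefficient values below are illustrative placeholders rather than the constants used in the paper, and the threshold-crossing rule mirrors the latent-index equations above.

```python
# Sketch of the simulation design: correlated normal covariates, latent-
# index rules for RCT eligibility, compliance, and population treatment,
# and a response with a heterogeneous treatment effect. All coefficient
# values are illustrative, not those used in the paper.
import numpy as np

rng = np.random.default_rng(2)
N = 30_000
Sigma = 0.3 * np.ones((4, 4)) + 0.7 * np.eye(4)     # equicorrelated covariates
X = rng.multivariate_normal(np.zeros(4), Sigma, size=N)  # X[:, 3] unobserved

def latent(alpha, beta, gamma):
    """Threshold rule: 1{alpha + X[:, :3] @ beta + gamma * X4 + eps > 0}."""
    eps = rng.normal(size=N)
    return (alpha + X[:, :3] @ beta + gamma * X[:, 3] + eps > 0).astype(int)

S = latent(-1.0, np.array([0.5, 0.0, 0.0]), 0.3)        # RCT eligibility
C = latent(0.2, np.array([0.0, 0.5, 0.0]), 0.3)         # complier status
W_pop = latent(-0.5, np.array([0.0, 0.0, 0.5]), 0.3)    # population treatment
W = np.where(S == 1, rng.integers(0, 2, size=N), W_pop)  # random in the RCT
T = W * C                                   # received iff assigned and complier
tau = np.where(X[:, 0] > 0, 2.0, 1.0)       # heterogeneous treatment effect
Y = tau * T + X[:, :3] @ np.array([1.0, 1.0, 1.0]) + rng.normal(size=N)
print("compliance rate:", C.mean().round(2))
```

Varying the intercepts shifts the eligibility, compliance, and treatment rates, while varying the coefficients on the unobserved fourth covariate controls the degree of confounding.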
In the RCT group assigned to treatment, we train a gradient boosting algorithm (Friedman, 2001) on the covariates to predict who in the control group would comply with treatment ($C_i = 1$), which is unobservable. These individuals would have complied had they been assigned to the treatment group. For this group of observed compliers to treatment and predicted compliers from the control group of the RCT, we estimate the response surface using gradient boosting with features $X_i$ and $T_i$. The PATTC is estimated according to the estimation procedure outlined above.

4.2 Simulation results
We vary each of the six parameters described above along a grid of five random standard normal values in order to generate different combinations of rates of compliance, treatment eligibility, RCT eligibility in the population, and confounding. For each possible combination of the six parameters, we run the simulation ten times and compute the average root mean squared error (RMSE) of PATTC, PATT, and the SATE. All other parameters are held constant. The PATT and PATTC estimates are obtained by estimating the response surface on all individuals in the RCT and applying Step 4 of the estimation procedure to the population members. (Note that the PATT estimator described here is the population-average causal effect of taking up treatment, adjusted according to the covariate distribution of population "compliers." In contrast, the Hartman et al. (2015) population estimator is the ITT estimator reweighted according to the covariate distribution of the population.)
Figure 2 shows the relationship between the percentage of compliers in the whole population, the percentage of people in the population eligible to participate in the RCT, and the RMSE of the PATT and PATTC estimators. The PATT estimator performs poorly when the compliance rate is low, whereas the PATTC estimator is comparatively insensitive to changes in the compliance rate. A similar pattern emerges when the compliance rate varies with the population treatment rate (Figure A1).
Figure 3 compares the RMSE of PATT and PATTC with the SATE at varying levels of compliance in the total population. PATTC is relatively invariant to changes in the compliance rate and outperforms both PATT and SATE in terms of minimizing RMSE when the compliance rate is below 70%. For high levels of compliance, the SATE tends to estimate the average causal effects for the target population as closely as PATT or PATTC.
Figures A2, A3, and A4 explore how the degrees of confounding in the mechanisms that determine sample selection, treatment assignment, and compliance affect estimation error. PATTC tends to be invariant to increases in the degree of confounding, whereas PATT is sensitive to confounding in the sample selection mechanism. The SATE estimates are generally more variable than the population estimates due to the sample estimator’s inability to account for differences in pretreatment covariates between the RCT sample and target population.
5 Application: Medicaid and health care use
We apply the proposed estimator to measure the effect of Medicaid coverage on health care use for a target population of adults who may benefit from expansions to the Medicaid program. In particular, we examine the population of nonelderly adults in the U.S. with household incomes at or below 138% of the Federal Poverty Level (FPL) — which amounts to $32,913 for a four–person household in 2014 — who may be eligible for Medicaid following the Affordable Care Act (ACA) expansion.
5.1 RCT sample
We draw RCT data from the Oregon Health Insurance Experiment (OHIE) (Finkelstein et al., 2012; Baicker et al., 2013; Baicker et al., 2014; Taubman et al., 2014). In 2008, approximately 90,000 uninsured low-income adults entered the OHIE lottery for the chance to receive Medicaid benefits. (Eligible participants were Oregon residents, US citizens or legal immigrants, aged 19 to 64, not otherwise eligible for public insurance, who had been without insurance for six months and had income below the FPL and assets below $2,000.) Treatment occurred at the household level: participants selected by the lottery won the opportunity for themselves and any household member to apply for Medicaid. Within a sample of 74,922 individuals representing 66,385 households, 29,834 participants were selected by the lottery; the remaining 45,008 participants served as controls in the experiment. Participants in selected households received benefits if they returned an enrollment application within 45 days of receipt. Among participants in selected households, about 60% mailed back applications and only 30% successfully enrolled. (About half of the returned applications were deemed ineligible, primarily due to failure to demonstrate income below the FPL. Enrolled participants were required to recertify their eligibility status every six months.)
The response data originate from a mail survey administered to participants over July and August 2009. We use the same definition of insurance coverage as Finkelstein et al. (2012) to form the measure of compliance: a binary variable indicating whether the participant was enrolled in any Medicaid program during the study period. The OHIE data include pretreatment covariates for gender, age, race, ethnicity, health status, education, and household income.
The outcomes of interest are binary variables for any emergency room (ER) and outpatient visits in the past 12 months. ER use is an important outcome because it is the main delivery system through which the uninsured receive health care. The uninsured could potentially receive higher quality and more affordable health care through outpatient visits. An important question for policymakers is whether Medicaid expansions will decrease ER utilization by the previously uninsured.
Subsequent research calls into question the external validity of the OHIE, which produced the counterintuitive finding that Medicaid increased ER use among RCT participants (Finkelstein et al., 2012; Taubman et al., 2014). For example, quasi-experimental studies on the impact of the 2006 Massachusetts health reform, which served as a model for the ACA, show that ER use decreased or remained constant following the reform (Miller, 2012; Kolstad and Kowalski, 2012). A challenge to the external validity of the OHIE is that its exclusion criteria were likely more restrictive than those of government health insurance expansions.
5.2 Observational data
We acquire data on the target population from the National Health Interview Survey (NHIS) for the years 2008 to 2017. (A possible limitation of this application is that it ignores the complex sampling techniques of the NHIS sample design, such as differential sampling, which is discussed in detail in Parsons et al. (2014).) We restrict the sample to respondents with income below 138% of the FPL who are uninsured or on Medicaid, and select covariates on respondent characteristics that match the OHIE pretreatment covariates. The outcomes of interest from the NHIS are variables on ER and outpatient visits in the past 12 months. We use a recoded variable that indicates whether respondents are on Medicaid as an analogue to the OHIE compliance measure.
5.3 Verifying assumptions
In order for $\tau_{\mathrm{PATTC}}$ to be identified, Assumptions 1–7 must be met. Assumption 1 ensures that potential outcomes for participants in the target population would be identical to their outcomes in the RCT had they been randomly assigned their observed treatment. In the empirical application, Medicaid coverage for uninsured individuals is applied in the same manner in the RCT as in the population. Differences in potential outcomes due to sample selection might arise, however, if the mail surveys used to elicit health care use responses differ between the RCT and the nonrandomized study.
We cannot directly test Assumptions 3 and 4, which state that potential outcomes for treatment and control are independent of sample assignment for individuals with the same covariates and assignment to treatment. The assumptions are met only if every possible confounder associated with the response and the sample assignment is accounted for. In estimating the response surface, we use all demographic, socioeconomic, and preexisting health condition data common to the OHIE and NHIS data. Potentially important unobserved confounders include the number of hospital and outpatient visits in the previous year, proximity to health services, and enrollment in other federal programs.
The final two columns of Table A1 compare RCT participants selected for Medicaid with population members on Medicaid. Compared to the RCT compliers, the target population "compliers" are predominantly female, younger, more racially and ethnically diverse, less educated, and live in higher income households. Diagnoses of diabetes, asthma, high blood pressure, and heart disease are more common among the population on Medicaid than among the RCT treated.
The strong ignorability assumptions may also be violated because the OHIE applied more stringent exclusion criteria than the NHIS sample. While the RCT and population sample both screened for individuals below the FPL, only the RCT required enrollees to recertify their eligibility status every six months.
A violation of no interference (Assumption 5) biases the estimate of $\tau_{\mathrm{PATTC}}$ if, for instance, treated participants' Medicaid coverage makes control participants more likely to visit the ER. Interference is less likely in this experimental setup because treatment occurs at the household level.
Assumption 2 is violated if assignment to treatment influences the compliance status of individuals with the same covariates. The compliance ensemble accurately classifies compliance status for 77% of treated RCT participants with only the covariates, and not treatment assignment, as model inputs. (The compliance ensemble is evaluated in terms of 10-fold cross-validated MSE; the distribution of MSE for the ensemble and its candidate algorithms is provided in Table A5.) This provides evidence in favor of the conditional independence assumption.
The exclusion restriction (Assumption 7) ensures treatment assignment affects the response only through enrollment in Medicaid. It is reasonable that a person's enrollment in Medicaid, not just their eligibility to enroll, would affect their hospital use. For private health insurance, one might argue that eligibility may be negatively correlated with hospital use, as people with preexisting conditions are less often eligible yet go to the hospital more frequently. This should not be the case with a federally funded program such as Medicaid.
5.3.1 Placebo tests
Similar to the procedure proposed by Hartman et al. (2015), we conduct placebo tests to check whether the average outcomes differ between the RCT compliers on Medicaid and the adjusted population "compliers" on Medicaid. (A placebo test for Assumption 2 is not possible because we never observe whether RCT controls would actually take up treatment if assigned.) If the placebo tests detect a significant difference between the mean outcomes of these groups, it would indicate that either Assumption 1 (for $t = 1$) or Assumptions 3 and 4 are violated.
Table A3 reports the results of placebo tests, comparing the mean outcomes of RCT compliers against the mean outcomes of adjusted population "compliers." The former quantity is calculated from the observed RCT sample and the latter is the mean counterfactual estimated from Step 4 of the estimation procedure. Tests of equivalence between the two groups indicate that the differences across each outcome are not statistically significant. These results imply that the PATTC estimator is not biased by differences in how Medicaid is delivered or health outcomes are measured between the RCT and population, or by differences in sample or population members' unobserved characteristics.
5.3.2 Sensitivity to no defiers assumption
Angrist et al. (1996) show that the bias due to violations of Assumption 6 is equivalent to the difference of average causal effects of treatment received for compliers and defiers, multiplied by the relative proportion of defiers:

$$\mathrm{Bias} = \frac{\pi_d}{\pi_c - \pi_d}\,(\tau_c - \tau_d),$$

where $\pi_c$ and $\pi_d$ are the proportions of compliers and defiers and $\tau_c$ and $\tau_d$ are the corresponding average causal effects of treatment received.

Table A2 reports the distribution of participants in the OHIE by status of treatment assignment and treatment received. Assumption 6 does not hold due to the presence of defiers, i.e., participants who were assigned to control but enrolled in Medicaid during the study period. About 6.7% of the RCT sample were assigned to control but enrolled in Medicaid ($W_i = 0$, $T_i = 1$) and 65.5% of the sample complied with treatment assignment ($T_i = W_i$), which results in a bias multiplier of $0.067 / (0.655 - 0.067) \approx 0.11$. Suppose that the difference of average causal effects of Medicaid received on ER use for compliers and defiers is 1.2 percentage points. The resulting bias is only about 0.1 percentage points, which would not meaningfully alter the interpretation of the SATE or PATTC estimates reported below.
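The sensitivity calculation can be checked directly. The two shares come from the text above; the 1.2-point complier-defier difference is the supposition, not an estimate.

```python
# Back-of-the-envelope check of the no-defiers sensitivity calculation:
# bias = pi_d / (pi_c - pi_d) * (difference in complier/defier effects).
pi_d = 0.067   # share assigned to control but enrolled in Medicaid
pi_c = 0.655   # share complying with treatment assignment
multiplier = pi_d / (pi_c - pi_d)
bias = multiplier * 0.012   # supposed 1.2-point effect difference
print(round(multiplier, 2), round(bias, 4))
```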
5.4 Empirical results
We compare PATTC and PATT estimates for ER and outpatient use. We obtain estimates for the overall group of participants and subgroups according to sex, age, race, health status, education, and household income. Subgroup treatment effects are estimated by taking differences across response surfaces for a given covariate subgroup, and response surfaces are estimated with the ensemble mean predictions. We use treatment received, number of household members, and the subgroup covariates as features in the response models. We generate 95% confidence intervals for these estimates using 1,000 bootstrap samples.
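The subgroup procedure described above can be sketched as follows. This is a minimal illustration, not the paper’s implementation: it assumes the response surfaces have already been fitted and supplies their ensemble-mean predictions as arrays; the function name and signature are hypothetical.

```python
import numpy as np


def subgroup_effect_ci(y1_hat, y0_hat, subgroup, n_boot=1000, seed=0):
    """Bootstrap a 95% CI for a subgroup treatment effect.

    y1_hat, y0_hat: predicted outcomes under treatment and control for each
    population unit (e.g., ensemble mean predictions from the response models);
    subgroup: boolean mask selecting the covariate subgroup of interest.
    """
    rng = np.random.default_rng(seed)
    diffs = y1_hat[subgroup] - y0_hat[subgroup]  # unit-level surface differences
    point = diffs.mean()
    # Resample units with replacement and recompute the subgroup mean.
    idx = rng.integers(0, len(diffs), size=(n_boot, len(diffs)))
    boot = diffs[idx].mean(axis=1)
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return point, lo, hi
```

The percentile bootstrap here is one common choice; resampling before refitting the response surfaces would propagate estimation uncertainty in the surfaces as well, at much higher computational cost.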
Table A4 presents the PATTC estimates, which indicate that Medicaid coverage has a positive but considerably smaller effect on the number of ER and outpatient visits. For comparison, Finkelstein et al. (2012) report population estimates of the effect of Medicaid coverage on the number of ER and outpatient visits using 2004–2009 NHIS data on adults aged 19–64 below 100 percent of the federal poverty line. Finkelstein et al. (2012) estimate that Medicaid coverage significantly increases the number of ER visits by 0.08 [0.05, 0.12] and the number of outpatient visits by 1.45 [1.33, 1.57].
Figures A5, A6, and A7 examine heterogeneous treatment effect estimates on ER and outpatient use in the population. While this study is the first to our knowledge to estimate heterogeneity in treatment effects for the target population, Taubman et al. (2014) and Kowalski (2016) perform subgroup analyses on the RCT sample. Similar to the PATTC estimates, Taubman et al.’s (2014) subgroup analyses indicate that increases in ER use due to Medicaid are significantly larger for younger individuals and those with high school–level education. (Kowalski (2016) performs subgroup analyses on the OHIE sample data and finds larger increases in ER use as a result of Medicaid for men, English speakers, and individuals enrolled in a food stamp program prior to the lottery.)
6 Discussion
The simulation results presented in Section 4 show that the PATTC estimator outperforms its unadjusted counterpart when the compliance rate is low. Of course, the simulation results depend on the particular way we parameterized the compliance, selection, treatment assignment, and response schedules.
In particular, the strength of the correlation between the covariates and compliance governs how well the estimator performs, since Step 1 of the estimation procedure is to predict which members of the RCT control group would have complied had they been assigned to treatment. If it is difficult to predict compliance from the observed covariates, the estimator will perform badly because of the noise introduced by incorrectly treating noncompliers as compliers. Further research should examine how well the compliance model generalizes to the population and explore models that predict compliance in RCTs more accurately. Accurately predicting compliance is not only essential for obtaining unbiased estimates of average causal effects for target populations; it also tells researchers and policymakers which groups of individuals are unlikely to comply with treatment.
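The compliance-prediction step can be illustrated with a toy model. The paper uses an ensemble of classifiers; the single logistic regression below is a deliberately simple stand-in for that ensemble, and all names are assumptions of the sketch.

```python
import numpy as np


def fit_logit(X, y, lr=0.1, n_iter=2000):
    """Fit a logistic regression by gradient ascent on the log-likelihood
    (an illustrative stand-in for the compliance ensemble in Step 1)."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add an intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)  # average log-likelihood gradient
    return w


def predict_compliance(w, X):
    """Predicted probability of complying with assigned treatment."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```

In practice, one would fit on the treated arm, where compliance is observed, then score the control arm, whose compliance is never observed; randomization makes the two arms comparable on pretreatment covariates, which is what licenses this extrapolation.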
In the OHIE, only a fraction of those selected to receive Medicaid benefits actually enrolled. The compliance ensemble accurately classified compliance status for 77% of treated RCT participants using only the pretreatment covariates as features. While we do not know how well the compliance ensemble predicts for the control group, the control group should be similar to the treatment group on pretreatment covariates because of the RCT randomization. The model’s performance on the training set suggests that compliance is not purely random and depends on observed covariates, which supports the use of the proposed estimator.
In the empirical application, the sample population differs in several dimensions from the target population of individuals who will be covered by other Medicaid expansions, such as the ACA expansion covering all adults up to 138% of the FPL. For instance, the RCT participants are disproportionately white urban dwellers (Taubman et al., 2014). The RCT participants volunteered for the study and therefore may be in poorer health than the target population. These differences in baseline covariates make reweighting or response surface methods necessary to extend the RCT results to the population.
Explicitly modeling compliance allows us to decompose population estimates by subgroup according to pretreatment covariates common to both RCT and observational datasets; e.g., demographic variables, pre-existing conditions, and insurance coverage. We find substantial differences between sample and population estimates in terms of race, education, and health status subgroups. This pattern is expected because RCT participants volunteered for the study and are predominantly white and educated.
References
 Angrist et al. (1996) Angrist, J. D., G. W. Imbens, and D. B. Rubin (1996, June). Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434), 444–455.
 Baicker et al. (2014) Baicker, K., A. Finkelstein, J. Song, and S. Taubman (2014). The impact of Medicaid on labor market activity and program participation: Evidence from the Oregon health insurance experiment. The American Economic Review 104(5), 322.
 Baicker et al. (2013) Baicker, K., S. L. Taubman, H. L. Allen, M. Bernstein, J. H. Gruber, J. P. Newhouse, E. C. Schneider, B. J. Wright, A. M. Zaslavsky, and A. N. Finkelstein (2013). The Oregon experiment — effects of Medicaid on clinical outcomes. New England Journal of Medicine 368(18), 1713–1722.
 Breiman (2001) Breiman, L. (2001). Random forests. Machine Learning 45(1), 5–32.
 Buja et al. (1989) Buja, A., T. Hastie, and R. Tibshirani (1989). Linear smoothers and additive models. The Annals of Statistics, 453–510.
 Finkelstein et al. (2012) Finkelstein, A., S. Taubman, B. Wright, M. Bernstein, J. Gruber, J. P. Newhouse, and H. Allen (2012). The Oregon health insurance experiment: Evidence from the first year. The Quarterly Journal of Economics 127(3), 1057.
 Friedman (2001) Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, 1189–1232.
 Hartman et al. (2015) Hartman, E., R. Grieve, R. Ramsahai, and J. S. Sekhon (2015). From SATE to PATT: Combining experimental with observational studies to estimate population treatment effects. Journal of the Royal Statistical Society: Series A (Statistics in Society) 178(3), 757–778.
 Imai et al. (2008) Imai, K., G. King, and E. Stuart (2008). Misunderstandings among experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society: Series A (Statistics in Society) 171(2), 481–502.
 Imai et al. (2013) Imai, K., D. Tingley, and T. Yamamoto (2013). Experimental designs for identifying causal mechanisms. Journal of the Royal Statistical Society: Series A (Statistics in Society) 176(1), 5–51.
 Kolstad and Kowalski (2012) Kolstad, J. T. and A. E. Kowalski (2012). The impact of health care reform on hospital and preventive care: Evidence from Massachusetts. Journal of Public Economics 96(11–12), 909–929.
 Kowalski (2016) Kowalski, A. E. (2016, June). Doing more when you’re running LATE: Applying marginal treatment effect methods to examine treatment effect heterogeneity in experiments. Working Paper 22363, National Bureau of Economic Research.
 Miller (2012) Miller, S. (2012). The effect of insurance on emergency room visits: An analysis of the 2006 Massachusetts health reform. Journal of Public Economics 96(11–12), 893–908.
 Parsons et al. (2014) Parsons, V. L., C. L. Moriarity, K. Jonas, T. F. Moore, K. E. Davis, and L. Tompkins (2014). Design and estimation for the National Health Interview Survey, 2006–2015. National Center for Health Statistics. Vital and Health Statistics 2(165), 1–53.
 Polley and Van Der Laan (2010) Polley, E. C. and M. J. Van Der Laan (2010). Super learner in prediction. Working Paper 266, Division of Biostatistics, University of California, Berkeley.
 Rubin (1990) Rubin, D. B. (1990). Comment: Neyman (1923) and causal inference in experiments and observational studies. Statistical Science 5(4), 472–480.
 Stuart et al. (2011) Stuart, E. A., S. R. Cole, C. P. Bradshaw, and P. J. Leaf (2011). The use of propensity scores to assess the generalizability of results from randomized trials. Journal of the Royal Statistical Society: Series A (Statistics in Society) 174(2), 369–386.
 Taubman et al. (2014) Taubman, S., H. Allen, B. Wright, K. Baicker, and A. Finkelstein (2014, January). Medicaid increases emergency department use: Evidence from Oregon’s health insurance experiment. Science 343(6168), 263–268.
 Tibshirani et al. (2012) Tibshirani, R., J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani (2012). Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 74(2), 245–266.
 van der Laan et al. (2007) van der Laan, M. J., E. C. Polley, and A. E. Hubbard (2007). Super learner. Statistical Applications in Genetics and Molecular Biology 6(1).