Generalized propensity score approach to causal inference with spatial interference

06/30/2020 · by Andrew Giffin, et al. · NC State University

Many spatial phenomena exhibit treatment interference where treatments at one location may affect the response at other locations. Because interference violates the stable unit treatment value assumption, standard methods for causal inference do not apply. We propose a new causal framework to recover direct and spill-over effects in the presence of spatial interference, taking into account that treatments at nearby locations are more influential than treatments at locations further apart. Under the no unmeasured confounding assumption, we show that a generalized propensity score is sufficient to remove all measured confounding. To reduce dimensionality issues, we propose a Bayesian spline-based regression model accounting for a sufficient set of variables for the generalized propensity score. A simulation study demonstrates the accuracy and coverage properties. We apply the method to estimate the causal effect of wildland fires on air pollution in the Western United States over 2005–2018.







1 Introduction

Understanding spatial processes in the environmental and health sciences has taken on new importance as we grapple with emerging ecological and epidemiological issues. Much of the research in these areas is associative in nature despite the effects of interest being causal (Bind, 2019). This is a result of both the frequent necessity of using observational data and the difficulty of implementing causal inference tools on data that exhibit spatial dependence and, in particular, interference. Interference is the phenomenon in which treatments at one location may affect the response at other locations. Naturally, with spatially-dependent processes, a treatment may impact the response nearby, leading to interference.

An example of spatial interference is the relationship between wildland fires and air pollution. Treating wildland fires as the treatment and pollution as the response, it is clear that the treatment can substantially impact the response at the location of treatment and at distant locations. In this example all available data are observational, and therefore isolating average causal treatment effects requires accounting for confounding variables. Even in the ideal case where all potential confounders are observed across locations, it is unclear how to condition on these confounders without knowing their specific spatial relationships with the treatment and response. Conditioning on confounders at all locations, which is one way around this, is impractical for all but the smallest studies.

The difficulty that arises from interference in the context of spatially dependent processes is immediately apparent from the vantage of the potential outcomes framework developed by Rubin (1974). For a binary treatment without interference, there are two unit-level potential outcomes to consider. Under general treatment interference, there are 2^n unit-level potential outcomes to consider, where n is the total number of units, because each treatment permutation across all units represents a distinct treatment. In the case of geostatistical models that contain uncountably many spatial locations, the problem becomes even more intractable. For this reason, beginning with Cox (1958), much of the causal inference literature assumes away interference. The no-interference assumption is now usually invoked as one-half of the ubiquitous stable unit treatment value assumption (Rubin, 1980).

Relaxations to the no-interference assumption generally involve placing assumptions on the form of interference. Partial interference, a term coined by Sobel (2006), was the first relaxation developed, specifically for modeling vaccination treatments which are known to induce herd immunity. This assumption defines disjoint groups or clusters a priori which may exhibit interference, but precludes interference between groups. This form of interference was originally considered with experimental data by Halloran and Struchiner (1991, 1995), but expanded to non-randomized data by Hudgens and Halloran (2008); Tchetgen and VanderWeele (2012); Liu and Hudgens (2014); Papadogeorgou et al. (2019). The dual nature of this form of interference allows for information on both the direct treatment effects as well as the indirect or spill-over effects from interference. Additionally, the deluge of network data has resulted in a literature which allows for interference along edges of a pre-specified graph (Athey et al., 2018).

Spatially indexed data have been analyzed using both the partial interference and network interference strategies. For naturally clustered spatial data, the partial interference assumption can be used, e.g., as in Perez-Heydrich et al. (2014) and Zigler et al. (2012). Spatial data can also be simplified to the network setting. For areal data, this often entails creating a graph with edges between neighboring units, as in Verbitsky-Savitz and Raudenbush (2012). This, however, discards information about the distance between units.

Despite these advances, there has been little exploration of strictly spatial assumptions on the form of interference. To fill this gap in the literature, we propose a new framework to recover causal direct and spill-over effects in the presence of spatial interference, while taking into account the high dimensionality of the problem. We develop a generalized propensity score to account for spatial dependence in the distribution of treatment. To further reduce the size of the problem, we propose a model which accounts for a sufficient set of summary variables rather than the full generalized propensity score itself.

The proposed approach has a number of advantages over using a partial interference or network interference assumption. The partial interference assumption is only reasonable for limited cases when the data naturally cluster a significant distance apart. Moreover, the partial interference grouping must be specified a priori. The network interference assumption, while more flexible, abandons key spatial information about the distance between points, which may be crucial in the presence of true spatial confounding. Our proposed method retains all spatial information, and allows for the kernel range to be estimated concurrently.

2 Potential outcomes, interference, and identification

Assume that data are available at n spatial locations s_1, …, s_n in a spatial domain S. For spatial location s, define X(s) as the relevant covariates and Y(s) as the response. We will consider both real-valued and binary treatments A(s). We use the subscript S to refer to the full fields of random variables, e.g., A_S = {A(s) : s ∈ S}. Variables with subscript −s denote all locations in S excluding s. Lowercase letters refer to realizations of the variables.

Without restrictions, the response Y(s) is potentially a function of A and X at all locations, greatly increasing the number of potential outcomes. To make this manageable while still taking spatial interference into account, we assume that the potential outcome depends on the treatment field A_S through two mechanisms: a direct treatment, A(s), and an indirect/spill-over treatment, Ã(s) = ∫_S K(s, s′; φ) A(s′) ds′, where K is a kernel function with bandwidth φ. This constitutes a general class of interference structures. Examples 1 and 2 provide two important cases.

Example 1

For clustered data, the uniform kernel K(s, s′; φ) = 1{‖s − s′‖ ≤ φ} implies partial interference when the clusters are smaller than φ in diameter and separated by at least φ. Here the potential outcome exhibits stratified interference or anonymous interaction (Manski, 2013); i.e., Y(s) depends on its own treatment and the aggregate treatment of the other locations in its cluster.

Example 2

When K takes the Gaussian kernel form K(s, s′; φ) = exp(−‖s − s′‖² / (2φ²)) with bandwidth φ, interference decays smoothly over space.

Because only finitely many locations are observed in practice, the integral form of Ã(s) must be approximated with a sum. One approach is to assume that A(s_1), …, A(s_n) are the average treatments over n regions that partition S. This is a tractable approach that is particularly useful for binary treatments. Another more general approach is to treat A_S as a smooth function that can be well approximated by summing over the observed locations. In this paper we focus on the former, and approximate Ã(s_i) with the form Ã(s_i) = Σ_{j ≠ i} K(s_i, s_j; φ) A(s_j).
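As a concrete illustration (a sketch, not the authors' code), the discretized spill-over treatment can be computed as a kernel-weighted sum over grid cells, with each location's own treatment excluded so that it enters only through the direct term. The Gaussian kernel follows Example 2; the 3 × 3 toy grid is hypothetical.

```python
import numpy as np

def gaussian_kernel(d, phi):
    """Isotropic Gaussian kernel in distance d with bandwidth phi."""
    return np.exp(-d**2 / (2 * phi**2))

def spillover(coords, A, phi):
    """Approximate A_tilde(s_i) = sum_{j != i} K(s_i, s_j; phi) A(s_j),
    excluding each location's own (direct) treatment."""
    coords = np.asarray(coords, dtype=float)
    A = np.asarray(A, dtype=float)
    # pairwise distances between all locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = gaussian_kernel(d, phi)
    np.fill_diagonal(K, 0.0)  # own treatment enters only directly
    return K @ A

# toy 3x3 grid with a single treated cell in the centre
xs, ys = np.meshgrid(np.arange(3), np.arange(3))
coords = np.column_stack([xs.ravel(), ys.ravel()])
A = np.zeros(9); A[4] = 1.0  # centre cell treated
A_tilde = spillover(coords, A, phi=1.0)
```

Cells adjacent to the treated centre receive more spill-over than corner cells, reflecting the distance decay of the kernel.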

Implicitly, we assume that for any location s and treatment fields a_S and a′_S, Y(s; a_S) = Y(s; a′_S) if a(s) = a′(s) and ã(s) = ã′(s). This simplified treatment allows us to parsimoniously define the individual potential outcomes for all possible treatment fields in terms of only the local direct and spill-over treatments: Y(s; a, ã).

Identification of the treatment effects follows from the following assumptions:

Assumption 1 (Unconfoundedness)

For all treatment levels (a, ã), Y(s; a, ã) ⫫ (A(s), Ã(s)) | X_S.

Assumption 2 (Positivity)

For all treatment levels (a, ã) in the support of (A(s), Ã(s)), f(A(s) = a, Ã(s) = ã | X_S) > 0 for all s.

Assumption 3 (Consistency)

The potential outcome Y(s; a, ã) = Y(s) when A(s) = a and Ã(s) = ã.

For finite n, with only the assumptions above, treatment effects are theoretically identifiable. However, identification requires a number of repeated observations of the full fields that is rarely available in practice. To make the situation tractable, we make two additional assumptions about our data as follows:

Assumption 4 (Marginal Structural Model)

The potential outcomes model takes the form

Y(s; a, ã) = β₀ + β₁ a + β₂ ã + g(X_S) + ε(s),   (1)

where g is a general function of X_S, and ε(s) is an error process that is independent of A(s) and Ã(s). Here β₁ and β₂ quantify the direct and spill-over effects of treatment, respectively; φ quantifies the range of the spill-over effect Ã(s).

Under Assumptions 1–4, (1) is identifiable in the sense that

E[Y(s; a, ã) | X_S] = E[Y(s) | A(s) = a, Ã(s) = ã, X_S] = β₀ + β₁ a + β₂ ã + g(X_S).   (2)

The first equality follows from Assumptions 1 and 2. The second follows from Assumptions 3 and 4.

It is instructive to consider the dependence that is created by these assumptions. X_S is unrestricted, and is therefore plausibly spatially correlated. Because the direct treatment mechanism is a function of X_S, the treatment field A_S will likely reflect any spatial structure in X_S. The response field Y_S may reflect both general spatial dependence from X_S as well as any induced spatial dependence from A_S.

3 The generalized propensity score is a balancing score

The identification formula (2) implies that we can estimate β₁, β₂, and φ using the regression model

Y(s) = β₀ + β₁ A(s) + β₂ Ã(s) + g(X_S) + ε(s),   (3)

if g is known and ε(s) is a mean zero error process. In most cases, though, g is not known. The standard causal inference strategy at this point is to condition on X_S itself. However, even when X_S is fully observed, in the context of spatial analysis it is high dimensional: for unit s it does not suffice to condition on X(s); rather, conditioning is required on the covariates at all locations. With both high-dimensional confounders as well as our assumptions about the treatment mechanism, the natural path forward is to condition on the propensity of treatment (Rosenbaum and Rubin, 1983).

In a setting without interference, and thus only direct treatment effects, the standard propensity score for binary treatments is defined as e(X(s)) = P(A(s) = 1 | X(s)). This is easily extended to real-valued treatments using the conditional density f(A(s) | X(s)). In both cases, the score simply summarizes the conditional distribution of treatment. The propensity score is an example of a balancing score: a function of the covariates that, once conditioned on, induces independence between the treatment and covariates. If all confounders are included in X_S, then the propensity score, rather than X_S, may be conditioned on for unbiased treatment effects. When X_S is high-dimensional, as in our motivating example, this is a substantial dimension reduction.

Under interference, with treatment components A(s) and Ã(s), the propensity score approach can still be utilized by defining the propensity of treatment to be a summary of the conditional distribution of (A(s), Ã(s)). To this end, we define Ψ to be the joint propensity of A(s) and Ã(s):

Ψ(a, ã; X_S) = f(A(s) = a, Ã(s) = ã | X_S).

We refer to the bivariate density function Ψ as the generalized propensity score. Importantly, this general form of Ψ allows for the treatments A_S to be correlated, which in turn may cause dependence between A(s) and Ã(s).

The key insight is that Ψ is a balancing score. This implies that, paired with our no unmeasured confounders assumption, the observed treatments and potential outcomes are independent conditional on Ψ. This is the strategy which we use to recover unbiased estimates of our key coefficients β₁ and β₂. Theorem 1 shows this formally, by extending the analogous result for propensity scores for continuous treatments by Hirano and Imbens (2004) to our generalized propensity score Ψ.

Theorem 1 (Ψ is a balancing score)

Given Assumptions 1–4, for all locations s and treatment levels (a, ã),

f(A(s) = a, Ã(s) = ã | X_S, Ψ) = f(A(s) = a, Ã(s) = ã | Ψ).

The proof is provided in Appendix A.

By Theorem 1 it suffices to adjust for Ψ to remove confounding bias. Namely, Theorem 1 implies that

E[Y(s; a, ã)] = E{ E[Y(s) | A(s) = a, Ã(s) = ã, Ψ] }.   (4)

This suggests that we can adjust for potential confounding by incorporating Ψ into the regression model.

4 Modeling the generalized propensity score

Estimating Ψ is difficult. It is a bivariate density function over (A(s), Ã(s)), and non-parametric estimation of even univariate density functions suffers from dimensionality issues. To overcome this, we make the following dimension reduction assumption.

Assumption 5 (Ψ is a parametric distribution)

Ψ is a bivariate parametric density with parameters θ(s) that are functions of X_S and φ.

That is, the distribution of (A(s), Ã(s)) can be completely summarized by the low-dimensional parameters θ(s).

Example 3

If the treatments A_S are independent and Gaussian, then (A(s), Ã(s)) is itself Gaussian. Setting θ(s) to be the mean and variance of both A(s) and Ã(s) completely summarizes its distribution.
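As an illustration of Example 3, the summary parameters θ(s) have a closed form under an assumed linear Gaussian treatment model. The generating model below (coefficient gamma, noise variance sigma2, and a kernel weight matrix K with zero diagonal) is a hypothetical sketch, not the paper's specification.

```python
import numpy as np

def gps_params_gaussian(X, K, gamma, sigma2):
    """Summary parameters theta(s) of the generalized propensity score when
    A(s) = gamma * X(s) + eps(s), eps(s) ~ N(0, sigma2) independently.
    Under this (assumed) model, (A, A_tilde) is Gaussian, so theta(s) is the
    mean and variance of A(s) and of A_tilde(s) = sum_j K[s, j] A(s_j)."""
    K = np.asarray(K, dtype=float)
    mu_A = gamma * np.asarray(X, dtype=float)
    var_A = np.full_like(mu_A, sigma2)
    mu_At = K @ mu_A                       # linearity of expectation
    var_At = (K**2).sum(axis=1) * sigma2   # independence of the eps(s)
    return np.column_stack([mu_A, var_A, mu_At, var_At])

# two locations weighting each other with kernel weight 0.5
theta = gps_params_gaussian([1.0, 2.0], [[0.0, 0.5], [0.5, 0.0]],
                            gamma=2.0, sigma2=1.0)
```

Each row of theta then serves as the low-dimensional summary that Corollary 1 below says is sufficient to condition on.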

Corollary 1

Given Assumptions 1–5, for all locations s,

(A(s), Ã(s)) ⫫ X_S | θ(s).

This follows immediately from Theorem 1.

This states that conditioning on θ(s) is equivalent to conditioning directly on the distribution Ψ, and so our Theorem 1 result of unconfoundedness given Ψ extends to the considerably more tractable situation of unconfoundedness given θ(s). Identification of β₁ and β₂ follows from the conditional independence in Corollary 1, as shown in (4). Equation (5) sketches the manner in which the components of θ(s) will be conditioned on using B-splines. Let asterisks denote true values and unstarred variables estimated values. Based on

E[Y(s) | A(s) = a, Ã(s) = ã, θ*(s)] = β₀* + β₁* a + β₂* ã + h(θ*(s)),

we must have β₁ = β₁* and β₂ = β₂*. We use splines to allow for an arbitrary form of dependence between Y(s) and θ(s), and include them directly in the regression:

Y(s) = β₀ + β₁ A(s) + β₂ Ã(s) + h(θ(s)) + ε(s)
     = β₀ + β₁ A(s) + β₂ Ã(s) + Σ_k h_k(θ_k(s)) + ε(s).   (5)

The second line of (5) implicitly assumes that the spline components enter additively, an assumption which can be tested. In the presence of non-additivity, a tensor product of the components should be used, which allows for general interactions at considerable computational cost (Wood, 2006).

5 Bayesian inference and computational algorithm

The identification results (4) and (5) allow unbiased estimation of β₁ and β₂ using a regression of the observed response onto the direct and spill-over treatments as well as the spline estimates of θ(s). Implementing this involves three steps: Step 1 parametrizes and estimates the propensities of direct and spill-over treatment. Step 2 estimates a preliminary posterior for the range parameter φ, which must be done in a separate step for reasons discussed below. Step 3 estimates final posterior distributions for all parameters in (7) via Markov chain Monte Carlo sampling.

The propensities of direct treatment that are tackled in Step 1 are estimated by regressing A_S onto X_S. This requires parametrizing the form of f(A(s) | X_S) and identifying a correctly specified propensity score. The form of this score can vary in complexity. The simplest case is that of a local treatment assignment mechanism, i.e., the distribution of A(s) is influenced by X(s) only. This would simply entail a regression on local covariates. A moderately complex case would allow nearby covariates to inform the propensity of treatment. A very general case would allow A_S to be spatially dependent, conditional on X_S; that is, A(s) would depend directly on nearby treatments.

Estimating the spill-over propensity component of Step 1 is similar. First, a family of parametric distributions must be identified. One intuitive method of doing this is to select several candidate distributions based on the form of Ã(s), and to select among them by simulating values of Ã(s). For example, if A(s) is binary, then the potential candidates for the distribution of Ã(s) must be nonnegative and allow for a point mass at zero. Obvious contenders are zero-inflated lognormal and zero-inflated Gamma distributions. A natural way to select between them is to simulate from the estimated propensities of A_S to get simulated Ã values using different reasonable φ. The empirical distributions of these simulated Ã(s) will often suggest one family of distributions. With a chosen distribution in hand, the parameters θ(s) at each location can be estimated directly from the field X_S and φ. Because these parameters will be conditioned on by entering into a splined regression, it is advantageous that their values have reasonable spread. To this end, one-to-one transformations of the parameters such as log and logit are helpful.
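The candidate-fitting and transformation step can be sketched for the zero-inflated lognormal case. This is an illustrative moment-based fit from simulated Ã draws, not the authors' estimator; the parameter names (zero-mass probability pi, lognormal mu and sigma) are our labels.

```python
import numpy as np

def zil_params(samples):
    """Fit zero-inflated lognormal parameters from simulated draws of the
    spill-over treatment, returned on transformed (unconstrained) scales:
    logit of the zero-mass probability pi, the lognormal mu, and log sigma.
    Assumes the sample contains both zero and positive values."""
    samples = np.asarray(samples, dtype=float)
    pi = np.mean(samples == 0.0)          # point mass at zero
    logs = np.log(samples[samples > 0.0]) # lognormal part on the log scale
    mu, sigma = logs.mean(), logs.std()
    return np.log(pi / (1.0 - pi)), mu, np.log(sigma)

# toy draws: a third exact zeros, the rest lognormal
lp, mu, ls = zil_params([0.0, 0.0, np.e, np.e, np.e**3, np.e**3])
```

The logit and log transforms spread the fitted parameters over the real line, which is exactly the "reasonable spread" wanted before the spline conditioning.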

Step 2 involves identifying a plausible set of φ values to be used in Step 3. Because θ(s) represents a propensity score, estimating φ directly in the final model is problematic. It is clear from the definition of a propensity score that the response should not provide any information on the propensity of treatment. However, estimating a response model such as (6), which includes θ(s; φ) directly, does just that, since Y_S can influence φ and hence θ(s; φ). This problem is articulated in McCandless et al. (2010); Saarela et al. (2015, 2016); Zigler et al. (2013); and Zigler (2016). While steps can be taken to mitigate feedback from Y_S to θ(s), issues remain.

Our solution to this issue takes inspiration from the standard two-step propensity score treatment, in which propensity scores are first estimated and treated as fixed, and then conditioned on in an outcome model. Because φ is unknown, estimating θ(s) in advance is impossible. However, estimating the model with feedback in (6) does give approximate estimates of φ. From this approximate posterior, a set of reasonable values covering the plausible range of φ can be identified. Then θ(s; φ) can be pre-computed at each of these values and conditioned on simultaneously in the response model in Step 3. Because each of these is computed before the response model, the feedback issue is resolved.

Therefore in Step 2 we estimate

Y(s) = β₀ + β₁ A(s) + β₂ Ã(s; φ) + Σ_k h_k(θ_k(s; φ)) + ε(s),   (6)

where ε(s) is distributed independent Normal. An attempt to cut the feedback from Y_S to φ is made by estimating φ in the Metropolis step using only the spill-over term Ã(s; φ), while holding θ(s; φ) fixed. A recommended plausible set for φ might then be a short grid centered at the posterior mean of φ in (6) and spanning a few of its posterior standard deviations.

Finally, in Step 3 each fixed value φ_j enters the final model as

Y(s) = β₀ + β₁ A(s) + β₂ Ã(s; φ) + Σ_j Σ_k h_{jk}(θ_k(s; φ_j)) + ε(s).   (7)

This model produces accurate posteriors on all variables. Although each φ_j is fixed within the θ terms, φ can still vary within Ã(s; φ). For the spline terms in (6)–(7), we use B-spline expansions taken at fixed intervals over the variables' range of values (Eilers and Marx, 1996; Ngo and Wand, 2004). All regression coefficients are estimated using Gibbs sampling; φ, which now enters only through Ã(s; φ), uses a Metropolis step. If Assumptions 1–5 hold, we recover unbiased estimates of the treatment effects. Comparing the forms of the assumed true model (1) and the estimated model (7) shows that we have essentially replaced the unknown g(X_S) with flexible functions of θ(s).
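The B-spline design matrices used for the h_k terms can be built directly. The sketch below is a generic equally spaced cubic B-spline basis via the Cox–de Boor recursion, not the authors' code; R's splines package or scipy.interpolate provide equivalents.

```python
import numpy as np

def bspline_basis(x, n_basis, degree=3):
    """Design matrix of n_basis B-spline functions with equally spaced knots
    over the range of x, evaluated by the Cox-de Boor recursion."""
    x = np.asarray(x, dtype=float)
    inner = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    t = np.concatenate([np.full(degree, inner[0]), inner,
                        np.full(degree, inner[-1])])
    # order-0 basis: indicators of the knot spans
    B = np.array([(t[i] <= x) & (x < t[i + 1])
                  for i in range(len(t) - 1)], dtype=float).T
    B[x == t[-1], n_basis - 1] = 1.0  # close the right-most span at the boundary
    for k in range(1, degree + 1):    # raise the degree one step at a time
        Bn = np.zeros((x.size, len(t) - 1 - k))
        for i in range(len(t) - 1 - k):
            if t[i + k] > t[i]:
                Bn[:, i] += (x - t[i]) / (t[i + k] - t[i]) * B[:, i]
            if t[i + k + 1] > t[i + 1]:
                Bn[:, i] += (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * B[:, i + 1]
        B = Bn
    return B

# basis for one transformed theta component over its observed range
Bmat = bspline_basis(np.linspace(0.0, 1.0, 50), n_basis=6)
```

The columns of Bmat enter the Gibbs updates exactly like ordinary regression covariates, which is what makes the spline coefficients conjugate.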

6 Simulation study

We examine the performance of this method using simulated data, which take inspiration from the wildfire/air pollution data in Section 7. Since we use a binary treatment in Section 7 to indicate the presence of a fire, we use a binary A(s) here. In addition, we assume A(s) at different locations is independent conditional on the local X(s). This precludes the more complex cases of independence conditional on the full field X_S, or of conditional dependence among treatments. Doing this allows for more straightforward modeling of θ(s), as shown in Section 6.1.

We generate the data as follows. The fields X, A, and Ã are generated on regular grids over the unit square [0, 1]². We generate independent repeated observations of the fields for each dataset. The single covariate X is a mean zero, variance one Gaussian process with isotropic exponential covariance and spatial range 0.6. The binary direct treatment A(s) is determined locally and distributed independently given X(s). The continuous spill-over treatment takes the form Ã(s_i) = Σ_{j ≠ i} K(s_i, s_j; φ) A(s_j), with K a Gaussian kernel as defined in Example 2. Several versions of the confounder are generated as follows: a weighted average of the X values is taken using a Gaussian kernel, with the weights normalized to sum to 1. Simulations are run with the confounder kernel bandwidth set to three increasing values. Lastly, Y follows the form of (1), with ε independently distributed standard normal. Each setting is repeated 500 times.

6.1 Estimation

Following the three steps outlined in Section 5, we first parametrize and estimate θ(s). Because A_S is assumed to be conditionally independent given the local X(s), we can estimate components of θ(s) for the distributions of A(s) and Ã(s) separately. A(s) is binary, so we assume it has a Bernoulli distribution with a correctly specified propensity in which the logit of P(A(s) = 1 | X(s)) is affine in X(s). Its distribution is then captured with the standard propensity score P(A(s) = 1 | X(s)). These values can be estimated with a simple logistic regression of A onto X, with the corresponding component of θ(s) set to the log of the fitted values.
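The local propensity fit can be sketched as a plain Newton-Raphson (IRLS) logistic regression of A on X; this is a generic illustration of estimating P(A(s) = 1 | X(s)), not the authors' implementation, and the toy data are hypothetical.

```python
import numpy as np

def logistic_propensity(X, A, iters=25):
    """Fitted probabilities P(A(s) = 1 | X(s)) from a logistic regression of
    the binary direct treatment A on local covariates X, fit by
    iteratively reweighted least squares (Newton-Raphson)."""
    X = np.column_stack([np.ones(len(A)), X])  # add intercept
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        # Newton step: (X' W X)^{-1} X' (A - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (A - p))
    return 1.0 / (1.0 + np.exp(-X @ beta))

# toy non-separable data: treatment more likely at larger X
p_hat = logistic_propensity(np.array([0., 1., 2., 3., 4., 5.]),
                            np.array([0., 0., 1., 0., 1., 1.]))
```

At the MLE the score equation forces the fitted probabilities to average to the observed treatment rate, a quick sanity check on convergence.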

We then choose a parametric form for the distribution of Ã(s). From our estimated propensities, we use different plausible φ values to generate simulated A fields, which we then use to obtain an empirical distribution of simulated Ã. Examination of these distributions leads us to choose a zero-inflated lognormal distribution for Ã(s):

f(ã) = π 1{ã = 0} + (1 − π) LN(ã; μ, σ²),

where LN denotes the lognormal density. Rather than use the three parameters π, μ, and σ directly as components of θ(s), we choose three more stable one-to-one transformations: logit(π), μ, and log(σ).

In place of Step 2, a fixed set of φ values is used; these surround, but do not contain, the true φ. Rather than re-estimating these values with each simulation repetition, we use this fixed set to ensure comparability across repetitions. Finally, Step 3 uses Markov chain Monte Carlo to estimate all variables in (7). Further estimation details are provided in Appendix B.

In addition to the proposed generalized propensity score model, we estimate three comparison models: (i) the oracle model is the true model, which includes the otherwise-unknown confounder as a covariate; (ii) the local-only model conditions on the local covariates using splines; and (iii) the naive model simply regresses the outcome onto the treatments, without any causal conditioning.

6.2 Simulation results

Tables 1 and 2 show the simulation bias and coverage. The naive model does very poorly in all scenarios, indicating substantial confounding between A and Ã. The generalized propensity score model performs substantially better than both the local-only and naive models, although, intuitively, the local-only model does show reasonable direct effect estimates. In most cases, the generalized propensity score model performs comparably to the oracle model. Results for the other grid size are similar.

Oracle 0.2 (1.8) -0.7 (0.9) 0.1 (0.2)
Generalized propensity score 1.4 (1.9) 0.5 (1) 0 (0.2)
Local Only 2.6 (2) -72.1 (1.1) 69.1 (0.4)
Naive 236.7 (1.9) 52.7 (1.5) 79.2 (0.5)
Oracle 0.1 (1.8) -0.7 (0.9) 0.1 (0.2)
Generalized propensity score 1.1 (1.9) -0.1 (1) 0.1 (0.2)
Local Only 1 (2.1) 28.5 (1.3) -39.7 (0.4)
Naive -205.9 (2.8) -104.1 (1.6) -53.1 (0.6)
Oracle 0.3 (1.8) -0.6 (0.9) 0.1 (0.2)
Generalized propensity score 1.5 (1.9) 0.9 (1.1) 0.1 (0.3)
Local Only 2.8 (2.3) -101.4 (2.1) 120.7 (1.4)
Naive 381.3 (2.8) 105.3 (2.3) 114.2 (1.3)
Table 1: Simulation bias multiplied by 1,000, with standard errors in parentheses

Oracle 95 (1) 94.6 (1) 93.6 (1.1)
Generalized propensity score 93.8 (1.1) 94.4 (1) 93.6 (1.1)
Local Only 93.2 (1.1) 8.8 (1.3) 0 (0)
Naive 0 (0) 27.6 (2) 0 (0)
Oracle 95.2 (1) 94.6 (1) 93.8 (1.1)
Generalized propensity score 95.2 (1) 93.8 (1.1) 92.8 (1.2)
Local Only 93.8 (1.1) 77.2 (1.9) 0 (0)
Naive 2 (0.6) 6 (1.1) 0 (0)
Oracle 95.4 (0.9) 94.2 (1) 93.8 (1.1)
Generalized propensity score 93 (1.1) 90.2 (1.3) 90.2 (1.3)
Local Only 90.8 (1.3) 4.6 (0.9) 0 (0)
Naive 0 (0) 8.2 (1.2) 0 (0)
Table 2: Simulation coverage (%), with standard errors in parentheses

7 Estimating the causal effect of wildland fires on air pollution

Wildland fires release harmful particles and gasses, impacting air quality near the fire and downwind (Larsen et al., 2018). Fine particulate matter smaller than 2.5 μm (PM2.5) has been linked to adverse cardiorespiratory health outcomes (Brook, 2007; Dominici et al., 2006; Corrigan et al., 2018; Rappold et al., 2012; Weber et al., 2016). For these reasons, understanding the causal effect of wildland fires on air pollution across space is of significant interest. Specifically, we are interested in the time-averaged causal effect of wildfires on ambient PM2.5 concentrations across the Western United States from 2005 to 2018.

7.1 Data

The response consists of 24-hour average PM2.5 concentrations measured in μg/m³ at 416 measurement sites, some of which are plotted in Fig. 1. Observations are collected every one, three, or six days depending on the station. The data are publicly available and provided by the Environmental Protection Agency. For each location, the long-term mean is subtracted.

The dates and locations of fires are compiled from a mix of satellite data and incident reports reported to the Geospatial Multi-Agency Coordination Wildland Fire Support program. Because the focus of our analysis is on PM2.5, only fires larger than 1,000 acres are included in the analysis. Among the 3,930 fires, 34.8% are missing either a start or end date. For these fires we impute the missing dates by modeling fire duration as a linear function of log(area burned).

Lastly, 11 confounders are included in the treatment balancing score. These include the four components of the National Fire Danger Rating System (energy release component, burning index, ignition component, and spread index) which are used to monitor daily risk of fire in the United States. The other variables used in the balancing score include elevation, daily temperature, relative humidity, wind speed, precipitation level, the Keetch-Byram drought index, and the numeric day of the year. A snapshot of the treatment, response, as well as the energy release component, ignition component, Keetch-Byram drought index, and relative humidity for one day are shown in Figure 1.

Our analysis treats each daily air observation as the center of a 9 × 9 grid of cells, with a total height and width of 9 degrees latitude/longitude. For each such grid, only the center cell has a response value. However, all 81 grid cells have covariates and direct treatment values. Each grid cell receives direct treatment A = 1 if there was at least one fire in the cell on that particular day, and 0 otherwise. Each covariate value is taken to be the mean of the observed covariates in each cell/day combination. For cell/days with no observed values, a value is imputed from nearby cells using a kernel smoother as implemented in the "fields" R package (Nychka et al., 2014). The end result is 605,414 observed grids, each of which contains 9 × 9 grids for X and A, as well as a centered Y value at the middle cell. Finally, any grids whose centers extend outside of the Western United States are excluded from analysis. In this context, the direct effect of treatment is the causal effect on Y(s) from a fire in the same grid cell (A(s)), whereas the indirect effect is the causal effect on Y(s) from fires in other cells (Ã(s)). As in the simulation study, each of these grids is treated as independent. In addition to the generalized propensity score model, we estimate a model that conditions on the local covariates only, using splines.

We use the same form of θ(s) as given in Section 6. Treatments at different locations are assumed to be conditionally independent given the local covariates, which allows us to estimate separate components of θ(s) for A(s) and Ã(s). The propensity of A(s) is estimated as a linear model of 5-element B-splines of the local covariates, and the propensity of Ã(s) is assumed to be zero-inflated lognormal. Conditioning on the local X(s) only is justified because we posit that it is the local covariates that contain the vast majority of information about the propensity of fire, with locations further away giving far less information.

(a) Fires and PM2.5
(b) Four of the 10 covariates
Figure 1: Data snapshot on July 1, 2012. Energy Release Component (ERC) and Ignition Component (IC) are two of the National Fire Danger Rating System components; KBDI refers to the Keetch-Byram drought index. In (a), fires are shown as cross-hatched circles and PM2.5 monitor locations as solid circles

7.2 Results

Table 3 shows the results. The causal direct effect estimate given by the generalized propensity score model is 1.03 μg/m³ of PM2.5, or 11.9% of the annual mean PM2.5 observed throughout. The range parameter φ is estimated to be 1.53 degrees of latitude/longitude, suggesting that fires impact PM2.5 up to roughly 3 degrees away. The estimate of 0.13 for β₂ represents the height of the spill-over kernel at its peak. All of β₁, β₂, and φ are highly significant. The estimated direct effect from the local-only model is 12% larger than the estimate from the generalized propensity score model, and the local-only model has an implausibly large and imprecise estimate of φ.

Figure 2 illustrates the implied causal effect of fire at different distances from the generalized propensity score model. Taking the center of a grid cell as our vantage point, one or more fires in the same grid cell causes a time-averaged increase of 1.03 μg/m³ of PM2.5, which corresponds to the step from 0 to 0.5 in the east/west or north/south direction, and slightly more than 0.5 at an angle. As the fire gets progressively further away, the causal effect decays smoothly until it approaches 0 roughly 3 grid cells away. Intuitively, this kernel extending out from 0 is completely determined by β₂ and φ: φ corresponds to the width of the kernel, and β₂ is the height of the kernel at its peak.

The wildfire analysis makes several simplifications that are important to consider. First, treating A as binary sacrifices information on the number and size of fires in a given grid cell; extending the method to incorporate fire size would preserve this information. Moreover, we assume β₁, β₂, and φ are fixed, although it is possible that they naturally vary across different fires and locations; however, there is not enough information in the data to identify these differences. Additionally, we do not consider time-varying effects, as we focus on the contribution to time-averaged PM2.5 levels. Another important simplification is the treatment of separate days as independent. There are temporal trends in the treatment, response, and covariates, and our assumption of independence may inflate the amount of information that our data appear to have.

Direct Effect (β₁) Spillover Effect (β₂) Bandwidth (φ)
Local Only 1.15 (1.05,1.25) 0.11 (0.09, 0.12) 17.37 (8.28, 42.12)
Generalized propensity score 1.03 (0.93,1.14) 0.13 (0.03, 0.25) 1.53 (1.17, 2.88)
Table 3: Posterior means (95% credible intervals)

Figure 2: Causal effect of a fire on PM2.5 by distance, as measured in degrees of latitude/longitude. The left axis shows the raw causal increase in PM2.5; the right axis shows this as a percentage of annual mean PM2.5 levels.

8 Discussion

The generalized propensity score method presented here establishes a new framework to recover causal direct and spill-over effects in the presence of spatial interference. The inherent dimensionality issues of the problem are dealt with via a novel propensity score approach, which uses a Bayesian spline-based regression model and a dimension reduction approximation to make the problem tractable. However, several critical yet strong assumptions must hold for our method to perform well. The method hinges on a correctly specified propensity score as well as a correctly specified potential outcomes model in (1). This includes accommodating conditionally dependent treatments and correctly characterizing the spatial dependence of A(s) on nearby covariates. Moreover, the no unmeasured confounders assumption is always a strong, but necessary, assumption for causal inference on observational data; in practice, considerable effort should be made to include any potential confounders. Lastly, we rely crucially on the assumption that the distribution of treatments can be encapsulated by the parameters θ(s) of the propensity score Ψ. This will rarely be completely accurate in practice, so effort should be made to select an appropriate parametric form for Ψ.

Appendix A Proof of Theorem 1

Claim 1: is a balancing score.

Proof 1

By the definition of a propensity score, conditioning on the score renders treatment assignment independent of the covariates, and thus the generalized propensity score is a balancing score for our covariates. As noted by Hirano and Imbens (2004), this balancing is a property of the score itself and does not yet rely on any unconfoundedness in the response.
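For reference, the balancing property can be written in the notation of Hirano and Imbens (2004) for a scalar treatment $T$ and covariates $X$; the symbols here are illustrative, since our spatial setting replaces them with the treatment vector and the sufficient parameters of its propensity score.

```latex
% Generalized propensity score (Hirano and Imbens, 2004) and its
% balancing property: within strata of r(t, X), the density of
% treatment at level t does not depend on X.
r(t, x) \;=\; f_{T \mid X}(t \mid x),
\qquad
f_{T}\!\left\{ t \,\middle|\, r(t, X),\, X \right\}
  \;=\; f_{T}\!\left\{ t \,\middle|\, r(t, X) \right\}.
```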

Claim 2: for all levels ,

(To ease notation, now let and .)

Proof 2

We can then write

Combining these gives Claim 2, which then implies our result.

Appendix B Bayesian estimation details for simulation

Uninformative priors are used for all parameters except the bandwidth, which receives a mildly informative prior. Markov chain Monte Carlo iterations begin at maximum likelihood values for all parameters except the bandwidth, which requires an initial estimate. A burn-in of 7,500 iterations is used, after which we sample 22,500 iterations. Gibbs sampling is used for all parameters except the bandwidth, which we transform and sample using Metropolis sampling, with an adaptive tuning scheme during the burn-in. Specifically, we use a normal proposal distribution whose scale depends on the number of grid cells along each axis. This prevents the bandwidth samples from becoming pathologically small, in which case the kernel cannot reach the neighboring cells and the parameter becomes effectively arbitrary. The comparison models are estimated with similar parameter settings.
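A Metropolis step of this kind, sampling a positive bandwidth on the log scale with proposal-scale adaptation during burn-in, can be sketched as follows. The log-posterior below is a stand-in toy target, and the tuning rule (scale the proposal up when the acceptance rate exceeds roughly 40%) is a common heuristic, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(log_bw):
    # Toy stand-in log-posterior for the log-bandwidth: a mildly
    # informative N(0, 1) prior on log_bw plus a likelihood term
    # peaked near bandwidth = 2 (illustrative only).
    bw = np.exp(log_bw)
    return -0.5 * log_bw**2 - 0.5 * ((bw - 2.0) / 0.5) ** 2

n_burn, n_keep = 2000, 4000
prop_sd = 0.5           # proposal standard deviation, adapted in burn-in
x = 0.0                 # sample on the log scale so the bandwidth stays > 0
n_accept = 0
samples = []
for it in range(n_burn + n_keep):
    proposal = x + rng.normal(0.0, prop_sd)
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal
        n_accept += 1
    # Adapt the proposal scale only during burn-in, every 100 iterations.
    if it < n_burn and (it + 1) % 100 == 0:
        rate = n_accept / (it + 1)
        prop_sd *= 1.1 if rate > 0.4 else 0.9
    if it >= n_burn:
        samples.append(np.exp(x))   # back-transform to the bandwidth scale

post_mean = float(np.mean(samples))
```

Freezing the adaptation after burn-in keeps the post-burn-in chain a valid (non-adaptive) Metropolis sampler.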

For convenience, collect the regression coefficients and the spline coefficients into a single parameter vector, and let the design matrix have columns for the corresponding covariates and the B-spline bases. We then specify the priors in terms of these quantities.
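One way to assemble a design matrix of this shape is sketched below: a clamped cubic B-spline basis built with SciPy, stacked alongside an intercept, a treatment indicator, and a spillover covariate. All variable names and the simulated inputs are illustrative, not the paper's.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_basis=8, degree=3):
    """Evaluate a clamped B-spline basis (n_basis columns) at points x."""
    lo, hi = float(x.min()), float(x.max())
    n_interior = n_basis - degree - 1
    interior = np.linspace(lo, hi, n_interior + 2)[1:-1]
    # Clamped knot vector: boundary knots repeated (degree + 1) times.
    knots = np.concatenate([[lo] * (degree + 1), interior, [hi] * (degree + 1)])
    cols = []
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        cols.append(BSpline(knots, coef, degree)(x))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
n = 200
treatment = rng.binomial(1, 0.3, n).astype(float)   # illustrative binary treatment
spillover = rng.gamma(2.0, 1.0, n)                  # illustrative kernel-weighted neighbor treatment
confounder = rng.uniform(0.0, 1.0, n)               # covariate entering through the spline

# Design matrix: intercept, treatment, spillover, and B-spline bases.
X = np.column_stack([np.ones(n), treatment, spillover, bspline_basis(confounder)])
```

With a clamped knot vector, the spline columns form a partition of unity over the data range, so the smooth term is well behaved at the boundaries.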


  • S. Athey, D. Eckles, and G. W. Imbens (2018) Exact p-values for network interference. Journal of the American Statistical Association 113 (521), pp. 230–240. Cited by: §1.
  • M. Bind (2019) Causal modeling in environmental health. Annual review of public health 40, pp. 23–43. Cited by: §1.
  • R. D. Brook (2007) Is air pollution a cause of cardiovascular disease? updated review and controversies. Reviews on environmental health 22 (2), pp. 115–138. Cited by: §7.
  • A. E. Corrigan, M. M. Becker, L. M. Neas, W. E. Cascio, and A. G. Rappold (2018) Fine particulate matters: the impact of air quality standards on cardiovascular mortality. Environmental research 161, pp. 364–369. Cited by: §7.
  • D. R. Cox (1958) Planning of experiments.. Wiley. Cited by: §1.
  • F. Dominici, R. D. Peng, M. L. Bell, L. Pham, A. McDermott, S. L. Zeger, and J. M. Samet (2006) Fine particulate air pollution and hospital admission for cardiovascular and respiratory diseases. JAMA 295 (10), pp. 1127–1134. Cited by: §7.
  • P. H. Eilers and B. D. Marx (1996) Flexible smoothing with b-splines and penalties. Statistical science, pp. 89–102. Cited by: §5.
  • M. E. Halloran and C. J. Struchiner (1991) Study designs for dependent happenings. Epidemiology, pp. 331–338. Cited by: §1.
  • M. E. Halloran and C. J. Struchiner (1995) Causal inference in infectious diseases.. Epidemiology (Cambridge, Mass.) 6 (2), pp. 142–151. Cited by: §1.
  • K. Hirano and G. W. Imbens (2004) The propensity score with continuous treatments. Applied Bayesian modeling and causal inference from incomplete-data perspectives 226164, pp. 73–84. Cited by: §3, Proof 1.
  • M. G. Hudgens and M. E. Halloran (2008) Toward causal inference with interference. Journal of the American Statistical Association 103 (482), pp. 832–842. Cited by: §1.
  • A. E. Larsen, B. J. Reich, M. Ruminski, and A. G. Rappold (2018) Impacts of fire smoke plumes on regional air quality, 2006–2013. Journal of exposure science & environmental epidemiology 28 (4), pp. 319. Cited by: §7.
  • L. Liu and M. G. Hudgens (2014) Large sample randomization inference of causal effects in the presence of interference. Journal of the American Statistical Association 109 (505), pp. 288–301. Cited by: §1.
  • C. F. Manski (2013) Identification of treatment response with social interactions. The Econometrics Journal 16 (1), pp. S1–S23. Cited by: Example 1.
  • L. C. McCandless, I. J. Douglas, S. J. Evans, and L. Smeeth (2010) Cutting feedback in bayesian regression adjustment for the propensity score. The international journal of biostatistics 6 (2). Cited by: §5.
  • L. Ngo and M. P. Wand (2004) Smoothing with mixed model software. Cited by: §5.
  • D. Nychka, R. Furrer, and S. Sain (2014) Fields: tools for spatial data. R package version 7.1. Cited by: §7.1.
  • G. Papadogeorgou, F. Mealli, and C. M. Zigler (2019) Causal inference with interfering units for cluster and population level treatment allocation programs. Biometrics. Cited by: §1.
  • C. Perez-Heydrich, M. G. Hudgens, M. E. Halloran, J. D. Clemens, M. Ali, and M. E. Emch (2014) Assessing effects of cholera vaccination in the presence of interference. Biometrics 70 (3), pp. 731–741. Cited by: §1.
  • A. G. Rappold, W. E. Cascio, V. J. Kilaru, S. L. Stone, L. M. Neas, R. B. Devlin, and D. Diaz-Sanchez (2012) Cardio-respiratory outcomes associated with exposure to wildfire smoke are modified by measures of community health. Environmental Health 11 (1), pp. 71. Cited by: §7.
  • P. R. Rosenbaum and D. B. Rubin (1983) The central role of the propensity score in observational studies for causal effects. Biometrika 70 (1), pp. 41–55. Cited by: §3.
  • D. B. Rubin (1974) Estimating causal effects of treatments in randomized and nonrandomized studies.. Journal of Educational Psychology 66 (5), pp. 688. Cited by: §1.
  • D. B. Rubin (1980) Randomization analysis of experimental data: the fisher randomization test comment. Journal of the American Statistical Association 75 (371), pp. 591–593. Cited by: §1.
  • O. Saarela, L. R. Belzile, and D. A. Stephens (2016) A bayesian view of doubly robust causal inference. Biometrika 103 (3), pp. 667–681. Cited by: §5.
  • O. Saarela, D. A. Stephens, E. E. Moodie, and M. B. Klein (2015) On bayesian estimation of marginal structural models. Biometrics 71 (2), pp. 279–288. Cited by: §5.
  • M. E. Sobel (2006) What do randomized studies of housing mobility demonstrate? causal inference in the face of interference. Journal of the American Statistical Association 101 (476), pp. 1398–1407. Cited by: §1.
  • E. J. T. Tchetgen and T. J. VanderWeele (2012) On causal inference in the presence of interference. Statistical Methods in Medical Research 21 (1), pp. 55–75. Cited by: §1.
  • N. Verbitsky-Savitz and S. W. Raudenbush (2012) Causal inference under interference in spatial settings: a case study evaluating community policing program in chicago. Epidemiologic Methods 1 (1), pp. 107–130. Cited by: §1.
  • S. A. Weber, T. Z. Insaf, E. S. Hall, T. O. Talbot, and A. K. Huff (2016) Assessing the impact of fine particulate matter (PM2.5) on respiratory-cardiovascular chronic diseases in the New York City metropolitan area using hierarchical Bayesian model estimates. Environmental research 151, pp. 399–409. Cited by: §7.
  • S. N. Wood (2006) Low-rank scale-invariant tensor product smooths for generalized additive mixed models. Biometrics 62 (4), pp. 1025–1036. Cited by: §4.
  • C. M. Zigler, F. Dominici, and Y. Wang (2012) Estimating causal effects of air quality regulations using principal stratification for spatially correlated multivariate intermediate outcomes. Biostatistics 13 (2), pp. 289–302. Cited by: §1.
  • C. M. Zigler, K. Watts, R. W. Yeh, Y. Wang, B. A. Coull, and F. Dominici (2013) Model feedback in bayesian propensity score estimation. Biometrics 69 (1), pp. 263–273. Cited by: §5.
  • C. M. Zigler (2016) The central role of bayes' theorem for joint estimation of causal effects and propensity scores. The American Statistician 70 (1), pp. 47–54. Cited by: §5.