Average Treatment Effects in the Presence of Interference

April 8, 2021 · by Yuchen Hu, et al.

We propose a definition for the average indirect effect of a binary treatment in the potential outcomes model for causal inference. Our definition is analogous to the standard definition of the average direct effect, and can be expressed without needing to compare outcomes across multiple randomized experiments. We show that the proposed indirect effect satisfies a universal decomposition theorem, whereby the sum of the average direct and indirect effects always corresponds to the average effect of a policy intervention. We also consider a number of parametric models for interference considered by applied researchers, and find that our (non-parametrically defined) indirect effect remains a natural estimand when re-expressed in the context of these models.







1 Introduction

The classical way of analyzing randomized trials, following Neyman (1923) and Rubin (1974), is centered around the average treatment effect as defined using potential outcomes. Given a sample of $n$ units used to study the effect of a binary treatment $W_i \in \{0, 1\}$, we posit potential outcomes $Y_i(0)$ and $Y_i(1)$ corresponding to the outcome we would have measured had we assigned the $i$-th unit to control or treatment respectively, i.e., we observe $Y_i = Y_i(W_i)$. We then proceed by arguing that the (sample) average treatment effect

$$\tau = \frac{1}{n} \sum_{i=1}^{n} \bigl( Y_i(1) - Y_i(0) \bigr) \qquad (1)$$

admits a simple unbiased estimator under random assignment of treatment.
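As a concrete illustration of this unbiasedness, the following sketch (with made-up potential outcomes; all quantities are hypothetical and not taken from the paper) checks by Monte Carlo that the Horvitz-Thompson form of the estimator averages out to the sample average treatment effect under Bernoulli(1/2) assignment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical potential outcomes for n = 100 units (held fixed across experiments).
n = 100
y0 = rng.normal(size=n)                    # outcomes under control
y1 = y0 + 2.0 + 0.5 * rng.normal(size=n)   # outcomes under treatment
tau = np.mean(y1 - y0)                     # sample average treatment effect (1)

def ht_estimate(w, p=0.5):
    """Horvitz-Thompson estimate from one Bernoulli(p) randomization."""
    y = np.where(w == 1, y1, y0)           # observed outcomes Y_i = Y_i(W_i)
    return np.mean(y * (w / p - (1 - w) / (1 - p)))

# Averaging the estimator over many random assignments recovers tau.
reps = [ht_estimate(rng.binomial(1, 0.5, size=n)) for _ in range(20000)]
print(abs(np.mean(reps) - tau) < 0.02)
```

Each draw of the estimator is noisy, but its average over randomizations matches the sample-level target.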

One limitation of this classical approach is that it rules out interference, and instead bakes in an assumption that the observed outcome for any given unit does not depend on the treatments assigned to other units, i.e., that $Y_i$ is not affected by $W_j$ for any $j \neq i$ (Halloran and Struchiner, 1995). However, in a wide variety of applied settings, such interference effects not only exist, but are often of considerable scientific interest (Bakshy et al., 2012; Bond et al., 2012; Cai et al., 2015; Miguel and Kremer, 2004; Rogers and Feller, 2018; Sacerdote, 2001). For example, in an education setting, it may be of interest to understand how a didactic innovation affects not only certain targeted students, but also their peers. This has led to a recent surge of interest in methods for studying randomized trials under interference (Aronow and Samii, 2017; Eckles et al., 2017; Hudgens and Halloran, 2008; Leung, 2020; Li and Wager, 2020; Manski, 2013; Sävje et al., 2020; Tchetgen Tchetgen and VanderWeele, 2012).

A major difficulty in working under interference is that we no longer have a single “obvious” average effect parameter to target as in (1). In the general setting, each unit now has $2^n$ potential outcomes $Y_i(w)$, $w \in \{0, 1\}^n$, corresponding to every possible treatment combination assigned to the $n$ units, and these can be used to formulate effectively innumerable possible treatment effects that can arise from different assignment patterns. As discussed further in Section 2.1 below, the existing literature has mostly side-stepped this issue by framing the estimand in terms of specific policy interventions. While this approach is transparent, it forces the researcher to specify a policy intervention of interest—and doesn’t have the universality of an average estimand like $\tau$ in (1).

In this paper, we study a pair of averaging causal estimands, the average direct effect and the average indirect effect, that are valid under interference yet, unlike existing targets, can be defined and estimated using a single experiment and do not need to be defined in terms of hypothetical policy interventions. Qualitatively, the average direct effect measures the extent to which, in a given experiment and on average, the outcome $Y_i$ of a unit is affected by its own treatment $W_i$; meanwhile, the average indirect effect measures the responsiveness of $Y_i$ to treatments $W_j$ given to other units $j \neq i$.

The average direct estimand we consider is well known: It was proposed in Hudgens and Halloran (2008), and has recently been studied in depth by a number of authors including Sävje, Aronow, and Hudgens (2020). Our definition of the average indirect effect is to the best of our knowledge new, and is the main contribution of this paper. We follow this definition with a number of results to validate it. First, we prove a universal decomposition theorem whereby the sum of the average direct and indirect effects can always be interpreted as the total effect of an intuitive policy intervention. Second, we discuss a number of applied papers that consider parametrized notions of interference, and find that our (non-parametrically defined) average indirect effect can be naturally expressed in terms of the parametrizations considered by practitioners. Finally, we show that the indirect effect always admits an unbiased (not necessarily precise) estimator given data from only a single randomized experiment. As discussed further in Section 5, our hope is that the average indirect effect will provide researchers with a simple, non-parametric and universal estimand that can be used to get an initial impression of the strength (and qualitative aspects) of interference effects in an experiment, and then guide further investigations into the effectiveness of various policy interventions.

2 Treatment Effects under Interference

Throughout this paper, we study Bernoulli-randomized experiments using the potential outcomes model. The main difference between our setting and the standard Neyman-Rubin model is that, in Assumption 1, potential outcomes for the $i$-th unit may also depend on the intervention given to the $j$-th unit with $j \neq i$. For a further discussion of potential outcomes under interference, see Aronow and Samii (2017) and Hudgens and Halloran (2008).

Assumption 1.

Our sample of size $n$ is determined by a set of potential outcomes $Y_i(w)$ for $i = 1, \ldots, n$ and $w \in \{0, 1\}^n$ such that, given a treatment vector $W \in \{0, 1\}^n$, we observe outcomes $Y_i = Y_i(W)$.

Assumption 2.

We have a randomized experiment with a Bernoulli design, i.e., there is a (deterministic) vector $\pi \in (0, 1)^n$ such that the treatments are generated as

$$W_i \sim \text{Bernoulli}(\pi_i), \qquad i = 1, \ldots, n, \qquad (2)$$

independently of each other and of the potential outcomes $\{Y_i(w)\}$. We write $\mathbb{E}[\cdot]$ for expectations over the random treatment assignment (2).

In this context, we define the average direct and indirect effects as follows. For convenience, we will often use notation of the type $Y_i(W_{-j}, w_j = w)$ to reference specific potential outcomes; here, $Y_i(W_{-j}, w_j = w)$ means the outcome we would observe for the $i$-th unit if we assigned the $j$-th unit to treatment status $w$, and otherwise maintained all but the $j$-th unit at their realized treatments $W_{-j}$.

Definition 1 (Hudgens and Halloran (2008)).

Under Assumption 1, the average direct effect of a binary treatment administered in a Bernoulli trial as in Assumption 2 is

$$\tau_D = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\bigl[ Y_i(W_{-i}, w_i = 1) - Y_i(W_{-i}, w_i = 0) \bigr]. \qquad (3)$$

Definition 2.

Under Assumption 1, the average indirect effect of a binary treatment administered in a Bernoulli trial as in Assumption 2 is

$$\tau_I = \frac{1}{n} \sum_{i=1}^{n} \sum_{j \neq i} \mathbb{E}\bigl[ Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0) \bigr]. \qquad (4)$$
The definition of the direct effect is natural. It measures the average effect of an intervention on the unit being intervened on—while marginalizing over the rest of the treatment assignments. In a study without interference, $\tau_D$ matches the average treatment effect $\tau$ in (1). Furthermore, as shown in Sävje, Aronow, and Hudgens (2020), many popular estimators of the average treatment effect can in fact be re-interpreted as estimators for $\tau_D$ in the presence of interference.

Our definition of the indirect effect is an immediate formal generalization of $\tau_D$ to cross-unit treatment effects. It measures the average effect of an intervention on all units except the one being intervened on, again marginalizing over the rest of the assignment process. However, unlike in the case of $\tau_D$, it’s less obvious that $\tau_I$ would be of qualitative interest: In the no-interference case, we clearly have $\tau_I = 0$ (as we would want), but interpreting $\tau_I$ in the presence of interference requires more work.

To this end, we provide a decomposition theorem that links the sum of the direct and indirect effects above to the total effect $\tau_N$ of nudging each unit’s treatment probability $\pi_i$ upwards. We refer to this quantity as the effect of a policy intervention because (5) is defined only in terms of observed outcomes (not potential outcomes corresponding to counterfactual treatment assignments), and thus is a quantity that could be measured by simply averaging observed outcomes for different randomization probabilities $\pi$. Nudge effects as defined below are not (currently) widely used in the potential outcomes based interference literature, but they are prevalent in the social sciences due to their ease of interpretation and desirable analytic properties; see, e.g., Chetty (2009), Wager and Xu (2021), and references therein.

Definition 3.

Under Assumptions 1 and 2, we define the nudge effect

$$\tau_N = \frac{d}{d\zeta} \left\{ \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\pi + \zeta}\bigl[ Y_i \bigr] \right\} \Bigg|_{\zeta = 0}, \qquad (5)$$

where $\mathbb{E}_{\pi + \zeta}[\cdot]$ denotes expectation over treatments drawn with probabilities $\pi_i + \zeta$.

In the special case where treatment assignment probabilities are a constant, i.e., there is a $p \in (0, 1)$ such that $\pi_i = p$ for all $i = 1, \ldots, n$, $\tau_N$ takes a simpler form:

$$\tau_N = \frac{d}{dp} \left\{ \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_p\bigl[ Y_i \bigr] \right\}. \qquad (6)$$
Note that this form in (6) appears more commonly in the social science literature and (5) can be seen as a generalization of (6).

Theorem 1.

Under Assumptions 1 and 2, the sum of the direct and indirect effects satisfies

$$\tau_D + \tau_I = \tau_N. \qquad (7)$$
By connecting our abstract notions of direct and indirect effects to the effect of a concrete policy intervention, Theorem 1 provides an alternative lens on our definition of the indirect effect. Suppose, for example, that a researcher knew they wanted to study nudge interventions (the total effect of which is $\tau_N$), and was also committed to the standard definition of the average direct effect given in Definition 1. Then, it would be natural to define an indirect effect as $\tau_N - \tau_D$, i.e., to characterize as “indirect effect” any effect of the nudge intervention that is not captured by the direct effect; this is, for example, the approach (implicitly) taken in Heckman, Lochner, and Taber (1998). From this perspective, Theorem 1 could be seen as showing that these two possible definitions of the indirect effect in fact match, i.e., that $\tau_I = \tau_N - \tau_D$. We emphasize that Theorem 1 is a direct consequence of Bernoulli randomization, and holds conditionally on any realization of the potential outcomes $\{Y_i(w)\}$.
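The decomposition can be checked numerically by brute force. The sketch below uses a toy interference model of our own construction (a ring of 4 units with a made-up nonlinear response; all names and parameter values are illustrative): it computes the direct and indirect effects exactly by enumerating all $2^n$ assignments, and compares their sum against a finite-difference approximation of the nudge effect (6):

```python
import itertools
import numpy as np

# Toy interference model on n = 4 units in a ring (a made-up example):
# Y_i(w) = w_i + 0.5 * (treated fraction among i's two ring neighbors)^2.
n = 4
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def Y(i, w):
    frac = np.mean([w[j] for j in nbrs[i]])
    return w[i] + 0.5 * frac ** 2

def prob(w, p):  # Bernoulli(p) probability of the assignment vector w
    return np.prod([p if wi else 1 - p for wi in w])

def mean_outcome(p):  # (1/n) * sum_i E_p[Y_i], by exact enumeration
    return sum(prob(w, p) * np.mean([Y(i, w) for i in range(n)])
               for w in itertools.product([0, 1], repeat=n))

def direct_indirect(p):  # exact direct and indirect effects at probability p
    tD = tI = 0.0
    for w in itertools.product([0, 1], repeat=n):
        pr = prob(w, p)
        for j in range(n):
            w1, w0 = list(w), list(w)
            w1[j], w0[j] = 1, 0
            for i in range(n):
                diff = Y(i, w1) - Y(i, w0)
                if i == j:
                    tD += pr * diff / n   # own-treatment contrast
                else:
                    tI += pr * diff / n   # cross-unit contrast
    return tD, tI

p, eps = 0.3, 1e-5
tD, tI = direct_indirect(p)
slope = (mean_outcome(p + eps) - mean_outcome(p - eps)) / (2 * eps)
print(np.isclose(tD + tI, slope, atol=1e-4))  # decomposition: direct + indirect = slope
```

Since the model here is nonlinear in the neighbors' treatments, both effects vary with $p$, yet the identity holds at every $p$.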

2.1 Alternative Indirect Effect Definitions

Before proceeding with our goal of further elucidating $\tau_I$ in the context of different treatment models, we pause to compare the estimand $\tau_I$ with some alternative indirect effect definitions proposed in the literature. One early approach, due to Hudgens and Halloran (2008), is to define indirect effects as the effect of a policy shift on untreated units. In the context of Assumption 2, the definition of Hudgens and Halloran (2008) reduces to

$$\tau_{HH}(\pi, \pi') = \frac{1}{n} \sum_{i=1}^{n} \Bigl( \mathbb{E}_{\pi'}\bigl[ Y_i \,\big|\, W_i = 0 \bigr] - \mathbb{E}_{\pi}\bigl[ Y_i \,\big|\, W_i = 0 \bigr] \Bigr), \qquad (8)$$

where $\pi$ denotes the current randomization probabilities, and $\pi'$ denotes an alternative set of randomization probabilities.

The definition of Hudgens and Halloran (2008) has considerable intuitive appeal as a quantification of indirect effects. However, as discussed in the introduction, one limitation of this approach is that one cannot talk about indirect effects without specifying an alternative set of randomization probabilities $\pi'$; in particular, given a single randomized trial, it’s impossible to use (8) to understand whether interference occurred in that trial—even at the level of definitions. A second limitation of this definition, emphasized in Tchetgen Tchetgen and VanderWeele (2012), is that, when paired with the direct effect in Definition 1, it does not satisfy a decomposition theorem analogous to Theorem 1; in particular, the sum of $\tau_D$ and $\tau_{HH}(\pi, \pi')$ does not match the average effect of switching randomization probabilities from $\pi$ to $\pi'$.

Another popular approach to capturing indirect effects is via the exposure mapping approach developed in Aronow and Samii (2017). The main idea is to assume the existence of functions $f_i : \{0, 1\}^n \to \mathcal{D}$ such that potential outcomes only depend on the treatment vector $w$ via the compressed representation $f_i(w)$, i.e., $Y_i(w) = Y_i(w')$ whenever $f_i(w) = f_i(w')$. Conceptually, the no-interference setting can be understood as a special case of the setting of Aronow and Samii (2017) with $f_i(w) = w_i$; the exposure mapping framework then allows us to consider a wider (but still finite) variety of treatment signatures. Given this setting, Aronow and Samii (2017) consider estimators of averages of potential outcome types and define treatment effects in terms of contrasts of these quantities (see also Leung (2020) and Karwa and Airoldi (2018) for extensions and further discussions):

$$\tau(d, d') = \frac{1}{n} \sum_{i=1}^{n} \bigl( Y_i(d) - Y_i(d') \bigr), \qquad d, d' \in \mathcal{D}, \qquad (9)$$

where $Y_i(d)$ denotes the potential outcome of the $i$-th unit under exposure $d$.
Definitions of the type (9) are again conceptually attractive and sometimes enable us to very clearly express the answer to a natural policy question; see, e.g., Basse, Feller, and Toulis (2019). However, they again require the analyst to consider specific policy interventions to be able to even talk about indirect effects, and can also be unwieldy to use as the number of possible exposure types gets large.

Closest to the definition of $\tau_I$ is the average marginalized response (AMR) considered in the recent work of Aronow et al. (2020). The authors consider a setting where treatments are assigned to points in a geographic space. They focus on estimating the effect of treatment at an intervention point $s$ on the outcomes of points at a specific distance $d$ away from that point:

$$\tau_{AMR}(d) = \frac{1}{n} \sum_{s=1}^{n} \frac{1}{|\{i : \rho(i, s) = d\}|} \sum_{i \,:\, \rho(i, s) = d} \mathbb{E}\bigl[ Y_i(W_{-s}, w_s = 1) - Y_i(W_{-s}, w_s = 0) \bigr], \qquad (10)$$

where $\rho(i, s)$ is the distance between point $i$ and intervention point $s$. This “circle average” bears resemblance to Definition 2 in the sense that both of them marginalize over variation in the ambient treatment vector while holding one unit’s treatment fixed. One key difference between the two is that $\tau_{AMR}(d)$ defines a more “local” sense of indirect effect, focusing on the effects on points at a fixed distance $d$, whereas $\tau_I$ is more “global”, taking into account effects on all other units. We also note that in defining $\tau_{AMR}(d)$, the authors normalize each intervention point’s effect by the number of points at distance $d$, whereas $\tau_I$ takes the sum of a unit’s effect on other units without normalization. This form without normalization is necessary for the decomposition in Theorem 1 to hold.

3 Models for Interference

Our discussion so far has focused on an abstract specification where direct and indirect effects are defined via various (marginalized) contrasts between potential outcomes. Much of the existing applied work on causal inference under interference, however, has focused on simpler parametric specifications that, e.g., connect outcomes to treatments via a linear model. The purpose of this section is to examine our abstract, non-parametric definition of the indirect effect given in Definition 2, and to confirm that it still corresponds to an estimand one would want to interpret as an indirect effect once we restrict our attention to simpler parametric models. Below, we do so in three examples. The claimed expressions for $\tau_D$ and $\tau_I$ are derived in Section 6.

Example 1.

In studying the spillover effects of insurance training sessions on insurance purchase, Cai, De Janvry, and Sadoulet (2015) work in terms of a network model: There is an edge matrix $E \in \{0, 1\}^{n \times n}$, such that $W_j$ can only affect $Y_i$ if the corresponding units are connected by an edge, $E_{ij} = 1$. They then consider a linear-in-means model parametrized in terms of this network. For our purpose, we focus on a simple variant of the model of Cai et al. (2015) considered in Leung (2020), where only the effects of the ego’s treatment and the proportion of treated neighbors are considered as covariates. Writing $N_i = \{j : E_{ij} = 1\}$ for the neighborhood of the $i$-th unit, this results in a linear model induced by the structural equation¹

$$Y_i = \alpha + \beta W_i + \frac{\gamma}{|N_i|} \sum_{j \in N_i} W_j + \varepsilon_i. \qquad (11)$$

Under this model, it can be shown that

$$\tau_D = \beta, \qquad \tau_I = \gamma, \qquad (12)$$

i.e., the estimands from Definitions 1 and 2 map exactly to the parameters in the model (11).

¹The relation (11) should be taken as a structural model, meaning that we can generate potential outcomes in the sense of Assumption 1 by plugging candidate assignment vectors $w \in \{0, 1\}^n$ into (11), i.e., $Y_i(w) = \alpha + \beta w_i + \gamma |N_i|^{-1} \sum_{j \in N_i} w_j + \varepsilon_i$, for all $w$ and $i = 1, \ldots, n$.
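The mapping between the estimands and the model parameters is easy to verify by simulation. The sketch below uses a hypothetical ring network and made-up values for the intercept, ego, and spillover coefficients (here called alpha, beta, gamma), evaluates the contrasts in Definitions 1 and 2 directly, and recovers a direct effect equal to the ego coefficient and an indirect effect equal to the spillover coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ring network: each unit's neighbors are the 2 adjacent units.
n, p = 60, 0.4
alpha, beta, gamma = 1.0, 2.0, 1.5  # made-up structural parameters
nbrs = [np.array([(i - 1) % n, (i + 1) % n]) for i in range(n)]

def Y(i, w):  # linear-in-means response: ego treatment + treated-neighbor fraction
    return alpha + beta * w[i] + gamma * w[nbrs[i]].mean()

def tau_via_definitions(reps=200):
    tD = tI = 0.0
    for _ in range(reps):
        w = rng.binomial(1, p, size=n)  # marginalize over the ambient assignment
        for j in range(n):
            w1, w0 = w.copy(), w.copy()
            w1[j], w0[j] = 1, 0
            tD += (Y(j, w1) - Y(j, w0)) / (n * reps)
            for i in nbrs[j]:  # in a ring, only j's neighbors respond to w_j
                tI += (Y(i, w1) - Y(i, w0)) / (n * reps)
    return tD, tI

tD, tI = tau_via_definitions()
print(np.isclose(tD, beta), np.isclose(tI, gamma))
```

Because the model is linear, the per-unit contrasts are constant, so the match is exact up to floating-point error.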

Example 2.

The model in Example 1 assumes that the $i$-th unit responds in the same way to treatment assigned to any of its neighbors. However, this restriction may be implausible in many settings; for example, in social networks, there is evidence that some ties are stronger than others, and that peer effects are larger along strong ties (Bakshy et al., 2012). A natural generalization of Example 1 that allows for variable-strength ties is to consider a saturated structural linear model

$$Y_i = \alpha_i + \beta_i W_i + \sum_{j \in N_i} \gamma_{ij} W_j + \varepsilon_i, \qquad (13)$$

which allows for both unit-specific direct and indirect effects. Here, the individual parameters in this model are not identifiable; however, under (13),

$$\tau_D = \frac{1}{n} \sum_{i=1}^{n} \beta_i, \qquad \tau_I = \frac{1}{n} \sum_{i=1}^{n} \sum_{j \in N_i} \gamma_{ij}, \qquad (14)$$

i.e., our estimands can be understood as averages of the unit-level parameters.

Example 3.

In studying the effect of persuasion campaigns or other types of messaging, it is common to assume that people respond most strongly if they get a communication directly addressed to them, but may also respond if a member of their neighborhood (or their household) gets a communication. This assumption can be formalized in terms of the following exposure mapping model: Each unit has 3 potential outcomes, called “treated”, “exposed” and “none”, such that

$$Y_i(w) = \begin{cases} Y_i(\text{treated}) & \text{if } w_i = 1, \\ Y_i(\text{exposed}) & \text{if } w_i = 0 \text{ and } w_j = 1 \text{ for some } j \in N_i, \\ Y_i(\text{none}) & \text{otherwise.} \end{cases} \qquad (15)$$
Models of this type are considered by Sinclair, McConnell, and Green (2012) for studying voter mobilization, and by Rogers and Feller (2018) and Basse, Feller, and Toulis (2019) for studying anti-absenteeism interventions. Natural treatment effect parameters to consider following (9) include the average self-treatment and spillover effects

$$\tau_{\text{self}} = \frac{1}{n} \sum_{i=1}^{n} \bigl( Y_i(\text{treated}) - Y_i(\text{none}) \bigr), \qquad \tau_{\text{spill}} = \frac{1}{n} \sum_{i=1}^{n} \bigl( Y_i(\text{exposed}) - Y_i(\text{none}) \bigr). \qquad (16)$$
In model (15), if each neighborhood has $k$ units (i.e., each unit has $k - 1$ neighbors who could put them in the exposed condition), and the treatment probabilities are constant at $\pi_i = p$, then

$$\tau_D = \tau_{\text{self}} - \bigl( 1 - (1 - p)^{k-1} \bigr) \tau_{\text{spill}}, \qquad \tau_I = (k - 1)(1 - p)^{k-1} \, \tau_{\text{spill}}. \qquad (17)$$
We further note that models of the type (15) are often considered under two-stage randomized designs where at most one unit per neighborhood is treated. Our Bernoulli-randomized setup doesn’t allow for this. However, we can approximate it by making $p$ very small (in which case having any neighborhoods with 2 or more treated units becomes vanishingly rare); and, in this case, the expressions from (17) simplify,

$$\lim_{p \to 0} \tau_D = \tau_{\text{self}}, \qquad \lim_{p \to 0} \tau_I = (k - 1) \, \tau_{\text{spill}}, \qquad (18)$$

and correspond exactly to the targets in (16); the factor $k - 1$ in (18) simply accounts for the fact that any treatment will spread spillover effects to $k - 1$ neighbors.
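The closed forms in (17) can be verified by exact enumeration in a small instance. In the sketch below (a made-up example with two neighborhoods of size $k = 3$ and randomly drawn exposure-level outcomes; all values are hypothetical), we compute the direct and indirect effects directly from their definitions and compare against the formulas:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: n = 6 units in 2 neighborhoods of size k = 3.
n, k, p = 6, 3, 0.2
hoods = [list(range(0, 3)), list(range(3, 6))]
y_t, y_e, y_n = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)

def Y(i, w):  # exposure-mapping response: "treated" / "exposed" / "none"
    hood = hoods[0] if i < 3 else hoods[1]
    if w[i] == 1:
        return y_t[i]
    if any(w[j] for j in hood if j != i):
        return y_e[i]
    return y_n[i]

def prob(w):
    return np.prod([p if wi else 1 - p for wi in w])

tD = tI = 0.0  # exact direct / indirect effects by enumerating all assignments
for w in itertools.product([0, 1], repeat=n):
    pr = prob(w)
    for j in range(n):
        w1, w0 = list(w), list(w)
        w1[j], w0[j] = 1, 0
        for i in range(n):
            diff = Y(i, w1) - Y(i, w0)
            if i == j:
                tD += pr * diff / n
            else:
                tI += pr * diff / n

tau_self = np.mean(y_t - y_n)
tau_spill = np.mean(y_e - y_n)
q = (1 - p) ** (k - 1)
print(np.isclose(tD, tau_self - (1 - q) * tau_spill),
      np.isclose(tI, (k - 1) * q * tau_spill))
```

The enumeration covers all $2^6$ assignments, so the comparison is exact rather than Monte Carlo.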

4 Unbiased Estimation

The main focus of this paper has been on establishing definitions of average treatment effect metrics under interference, and on verifying their interpretability under various modeling assumptions. But in order for these definitions to be useful in applied statistical work, we of course also need these metrics to be readily identifiable and estimable under flexible conditions. A comprehensive discussion of treatment effect estimation under interference—including distributional results and efficiency theory—is beyond the scope of this paper. However, as one result in this direction, we note here that unbiased estimates of the average direct and indirect effects are always available in Bernoulli-randomized experiments via the Horvitz-Thompson construction.

As above, the case of the average direct effect is already well understood in the literature. The Horvitz-Thompson estimator for $\tau_D$ is (Sävje, Aronow, and Hudgens, 2020)

$$\hat\tau_D = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{W_i}{\pi_i} - \frac{1 - W_i}{1 - \pi_i} \right) Y_i. \qquad (19)$$

Furthermore, as is implicitly established in the technical appendix of Sävje et al. (2020), this estimator is unbiased in Bernoulli-randomized experiments, i.e., $\mathbb{E}[\hat\tau_D] = \tau_D$.

Next, in discussing estimators for $\tau_I$, we will work under a network interference model whereby the analyst has access to an interference graph $E$ and knows that the $i$-th unit’s potential outcomes are unaffected by the treatment given to the $j$-th unit whenever $E_{ij} = 0$, i.e.,

$$Y_i(W_{-j}, w_j = 1) = Y_i(W_{-j}, w_j = 0) \qquad (20)$$

for all $i$ and $j \neq i$ with $E_{ij} = 0$. This type of assumption is not required in principle; in particular, the condition (20) is vacuous if $E_{ij} = 1$ for all pairs $i \neq j$, i.e., if we assume that any treatment can affect any outcome. However, assumptions of this type are ubiquitous in practice (see, e.g., Examples 1 and 3 considered in Section 3), and when we have access to a sparse interference graph they can considerably improve the precision with which we can estimate indirect effects.

Given this setup, the Horvitz-Thompson estimator for $\tau_I$ is

$$\hat\tau_I = \frac{1}{n} \sum_{i=1}^{n} \sum_{j \neq i \,:\, E_{ij} = 1} \left( \frac{W_j}{\pi_j} - \frac{1 - W_j}{1 - \pi_j} \right) Y_i. \qquad (21)$$

Formally, this estimator looks like $\hat\tau_D$, except now we are measuring associations between the $j$-th unit’s treatment and the $i$-th unit’s outcome, for $j \neq i$. And, as in the case of the direct effect, this estimator is unbiased in Bernoulli-randomized experiments.

Theorem 2.

Under Assumptions 1 and 2, the Horvitz-Thompson estimator (21) is unbiased for the average indirect effect, $\mathbb{E}[\hat\tau_I] = \tau_I$.
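As a sanity check on Theorem 2 (and on the unbiasedness of (19)), the sketch below implements both Horvitz-Thompson estimators on a small hypothetical line graph with a made-up nonlinear response, computes the true direct and indirect effects by exact enumeration, and confirms by Monte Carlo that the estimators average out to the truth:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Small hypothetical example: n = 5 units on a line graph, nonlinear interference.
n = 5
E = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    E[i, i + 1] = E[i + 1, i] = 1
pi = np.array([0.3, 0.5, 0.4, 0.6, 0.5])  # unit-level Bernoulli probabilities

def Y(i, w):  # depends on own treatment and number of treated graph neighbors
    return w[i] + np.sqrt(1 + sum(w[j] for j in range(n) if E[i, j]))

def ht_estimates(w):  # Horvitz-Thompson estimators (19) and (21)
    y = np.array([Y(i, w) for i in range(n)])
    score = w / pi - (1 - w) / (1 - pi)
    tD_hat = np.mean(score * y)
    tI_hat = np.mean([sum(score[j] * y[i] for j in range(n) if E[i, j])
                      for i in range(n)])
    return tD_hat, tI_hat

# True direct and indirect effects by exact enumeration over all 2^n assignments.
tD = tI = 0.0
for w in itertools.product([0, 1], repeat=n):
    pr = np.prod([pi[j] if w[j] else 1 - pi[j] for j in range(n)])
    for j in range(n):
        w1, w0 = list(w), list(w)
        w1[j], w0[j] = 1, 0
        for i in range(n):
            diff = Y(i, w1) - Y(i, w0)
            if i == j:
                tD += pr * diff / n
            else:
                tI += pr * diff / n

draws = [ht_estimates(rng.binomial(1, pi)) for _ in range(20000)]
tD_mc, tI_mc = np.mean(draws, axis=0)
print(abs(tD_mc - tD), abs(tI_mc - tI))  # both small at this Monte Carlo size
```

Any single draw of the indirect-effect estimator is quite noisy even in this tiny example, which foreshadows the variance concerns discussed at the end of this section.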

Finally, we again emphasize that although results on unbiased estimation are helpful, they do not provide a complete picture of what good estimators for $\tau_D$ and $\tau_I$ should look like, or of what kinds of guarantees we should expect. The direct effect estimator (19) is further studied by Sävje et al. (2020), who provide bounds on its mean-squared error under a moderately sparse network interference model as in (20), where the degree of the interference graph is restricted to grow slowly relative to $n$. Li and Wager (2020) prove a central limit theorem for $\hat\tau_D$ under network interference with a random graph generative model. Meanwhile, in the case of very sparse graphs (where $E$ has bounded degree), the indirect effect estimator $\hat\tau_I$ could be studied using methods developed in Aronow and Samii (2017) and Leung (2020). However, in even moderately dense settings, Li and Wager (2020) find that unbiased estimators of indirect effects may have very large variance and caution against their use; they also propose alternative estimators that are more stable—again under a random graph generative model. To the best of our knowledge, efficiency theory for treatment effect estimation under interference remains as of now uninvestigated.

5 Discussion

There are many treatment effect estimation problems in which interference is present and needs to be accounted for, and in which indirect effects are of considerable scientific interest. In the existing literature, indirect effects are usually defined by either comparing outcomes across experiments with different randomization policies (Section 2.1), or by writing down a parametric model for interference (Section 3). In this paper, we contribute by proposing an alternative average indirect effect metric that can be non-parametrically defined using a single Bernoulli-randomized trial, thus providing practitioners with a starting point for investigating indirect effects that does not require reasoning in terms of additional data sources they do not currently have access to (or in terms of more elaborate models).

One challenge in defining average effect estimands in the presence of interference, however, is that our estimands will in general depend on the choice of Bernoulli-randomization probabilities $\pi$. In Figure 1, we illustrate this phenomenon by plotting $\tau_D$ and $\tau_I$ as a function of $p$ in the following 3 structural models, while assuming constant treatment assignment probabilities $\pi_i = p$,


Here, qualitatively, Setting 1 resembles the one considered by Cai, De Janvry, and Sadoulet (2015) and Leung (2020), as discussed in Example 1 from Section 3; Setting 2 exhibits a type of “herd immunity”, where units are more sensitive to treatment when most of their neighbors are untreated; and Setting 3 has complicated non-linear interference effects. We then see that, in Setting 1, $\tau_D$ and $\tau_I$ do not vary with $p$, but in Settings 2 and 3 they do—and may even change signs.

Figure 1: Illustration of $\tau_D$, $\tau_I$, and $\tau_N$: The purple curve corresponds to the expected average potential outcome $\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_p[Y_i]$. The slope of the line segments on the purple curve (the derivative of the purple curve) is the same as the value of the blue curve ($\tau_N$). Theorem 1 establishes that $\tau_N = \tau_D + \tau_I$; accordingly, in the plots, the blue curve ($\tau_N$) corresponds to the sum of the red curve ($\tau_D$) and the green curve ($\tau_I$). We consider the 3 settings listed in (22) where, in all cases, we assume constant treatment assignment probabilities $\pi_i = p$ and hold the number of neighbors fixed across units.

This potential dependence of $\tau_D$ and $\tau_I$ on $\pi$ is something that any practitioner using these estimands needs to be aware of. However, we believe such dependence to be largely unavoidable when seeking to define non-parametric estimands at the level of generality considered here. For example, when estimating indirect effects of immunization in a population where roughly 30% of units have been immunized, definitions of the type developed here could be used to support a non-parametric analysis of indirect effects. One should then be cognizant that any such effects would be “local” to the current overall immunization rate of 30%, and would likely differ from indirect effects we would measure at a 50% overall immunization rate. However, it seems unlikely that one could use data from a population with a 30% immunization rate to non-parametrically estimate average outcomes we might observe at a 50% rate; rather, to do so, one would need to posit a much more elaborate model for how infections spread.

This tension between non-parametrically defined estimands that are local to a given experimental setup and structural estimands that require more in-depth modeling of the interference mechanism has a number of parallels in other areas of causal inference. One notable example arises when estimating the effect of a binary treatment under endogeneity. A large fraction of modern empirical work in this area builds on the result of Imbens and Angrist (1994), who established that instrumental variables methods can be used to non-parametrically identify what is called the local average treatment effect, i.e., a treatment effect that is local to the subpopulation of units that respond to the instrument (and this subpopulation may change if we used different instruments). Recently, there has been considerable discussion of the relative merits of targeting classical structural parameters versus local average treatment effects, which are easier to identify and estimate but may not always correspond to policy-relevant parameters (see, e.g., Deaton (2009), Heckman and Urzua (2010) and Imbens (2010) for a recent exchange). In the long run, it seems plausible that—much like in the case of treatment effect estimation under endogeneity—both local and structural estimands will prove to be useful parts of a broad statistical toolkit for treatment effect estimation under interference.

6 Proofs

6.1 Proof of Theorem 1

We start with a slightly more formal form of the nudge effect:

$$\tau_N = \frac{d}{d\zeta} \left\{ \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\pi + \zeta}\bigl[ Y_i \bigr] \right\} \Bigg|_{\zeta = 0},$$

i.e., we take the derivative with respect to $\zeta$ and evaluate it at $\zeta = 0$. For index $i$, we can rewrite $\mathbb{E}_\pi[Y_i]$ in terms of the potential outcomes:

$$\mathbb{E}_\pi[Y_i] = \sum_{w \in \{0, 1\}^n} \prod_{l=1}^{n} \pi_l^{w_l} (1 - \pi_l)^{1 - w_l} \, Y_i(w).$$

The dependence of this term on $\pi_j$ is clear in this form. Taking a derivative with respect to $\pi_j$ and evaluating at $\zeta = 0$ yields

$$\frac{\partial}{\partial \pi_j} \mathbb{E}_\pi[Y_i] = \mathbb{E}\bigl[ Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0) \bigr].$$

If we sum over the indices $i$ and $j$ and multiply by $1/n$, we get

$$\tau_N = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{n} \mathbb{E}\bigl[ Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0) \bigr].$$

This can be further decomposed into two terms: the summation over pairs with $i = j$ and the summation over pairs with $i \neq j$. The first term matches the definition of $\tau_D$ and the second term matches the definition of $\tau_I$. Thus $\tau_D + \tau_I = \tau_N$.

6.2 Proof of Theorem 2

For each pair of indices $i \neq j$, we can write the expectation of the corresponding Horvitz-Thompson term as

$$\mathbb{E}\left[ \left( \frac{W_j}{\pi_j} - \frac{1 - W_j}{1 - \pi_j} \right) Y_i \right] = \mathbb{E}\bigl[ Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0) \bigr].$$

Thus, summing over $i$ and $j \neq i$ with $E_{ij} = 1$, we get

$$\mathbb{E}\bigl[ \hat\tau_I \bigr] = \frac{1}{n} \sum_{i=1}^{n} \sum_{j \neq i \,:\, E_{ij} = 1} \mathbb{E}\bigl[ Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0) \bigr].$$

When there is no edge connecting $i$ and $j$, the value of $Y_i$ does not depend on $W_j$, hence $\mathbb{E}[Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0)] = 0$. Hence we can rewrite the above term as

$$\mathbb{E}\bigl[ \hat\tau_I \bigr] = \frac{1}{n} \sum_{i=1}^{n} \sum_{j \neq i} \mathbb{E}\bigl[ Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0) \bigr] = \tau_I.$$
6.3 Proof of (12)

It’s clear from (11) that $Y_i(W_{-i}, w_i = 1) - Y_i(W_{-i}, w_i = 0) = \beta$ and, for $j \in N_i$, $Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0) = \gamma / |N_i|$. Hence

$$\tau_D = \beta, \qquad \tau_I = \frac{1}{n} \sum_{i=1}^{n} \sum_{j \in N_i} \frac{\gamma}{|N_i|} = \gamma.$$

6.4 Proof of (14)

Model (13) implies that $Y_i(W_{-i}, w_i = 1) - Y_i(W_{-i}, w_i = 0) = \beta_i$ and, for $j \in N_i$, $Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0) = \gamma_{ij}$. Hence

$$\tau_D = \frac{1}{n} \sum_{i=1}^{n} \beta_i, \qquad \tau_I = \frac{1}{n} \sum_{i=1}^{n} \sum_{j \in N_i} \gamma_{ij}.$$

6.5 Proof of (17)

For unit $i$, if none of its neighbors are treated, then $Y_i(W_{-i}, w_i = 0) = Y_i(\text{none})$; otherwise $Y_i(W_{-i}, w_i = 0) = Y_i(\text{exposed})$. The probability that none of its $k - 1$ neighbors are treated is $(1 - p)^{k-1}$. Thus the direct effect is

$$\tau_D = \frac{1}{n} \sum_{i=1}^{n} \Bigl( Y_i(\text{treated}) - (1 - p)^{k-1} Y_i(\text{none}) - \bigl( 1 - (1 - p)^{k-1} \bigr) Y_i(\text{exposed}) \Bigr).$$

The first term can be written as

$$\frac{1}{n} \sum_{i=1}^{n} Y_i(\text{treated}) = \tau_{\text{self}} + \frac{1}{n} \sum_{i=1}^{n} Y_i(\text{none}).$$

The second term can be written as

$$\frac{1}{n} \sum_{i=1}^{n} \Bigl( (1 - p)^{k-1} Y_i(\text{none}) + \bigl( 1 - (1 - p)^{k-1} \bigr) Y_i(\text{exposed}) \Bigr) = \bigl( 1 - (1 - p)^{k-1} \bigr) \tau_{\text{spill}} + \frac{1}{n} \sum_{i=1}^{n} Y_i(\text{none}).$$

Combining the two terms, we have

$$\tau_D = \tau_{\text{self}} - \bigl( 1 - (1 - p)^{k-1} \bigr) \tau_{\text{spill}}.$$

Now we consider the indirect effect. For indices $i \neq j$ such that $j \in N_i$, note that $Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0)$ is non-zero if and only if $i$ is not treated and has no treated neighbors except (possibly) $j$, in which case it equals $Y_i(\text{exposed}) - Y_i(\text{none})$. The probability of the above event is $(1 - p)^{k-1}$. Also note that $Y_i(W_{-j}, w_j = 1) - Y_i(W_{-j}, w_j = 0) = 0$ for indices $i, j$ not connected, since $Y_i$ does not depend on $W_j$. Hence

$$\tau_I = \frac{1}{n} \sum_{i=1}^{n} \sum_{j \in N_i} (1 - p)^{k-1} \bigl( Y_i(\text{exposed}) - Y_i(\text{none}) \bigr) = (k - 1)(1 - p)^{k-1} \, \tau_{\text{spill}}.$$

  • P. M. Aronow, C. Samii, and Y. Wang (2020) Design-based inference for spatial experiments with interference. arXiv preprint arXiv:2010.13599. Cited by: §2.1.
  • P. M. Aronow and C. Samii (2017) Estimating average causal effects under general interference, with application to a social network experiment. The Annals of Applied Statistics 11 (4), pp. 1912–1947. Cited by: §1, §2.1, §2, §4.
  • E. Bakshy, I. Rosenn, C. Marlow, and L. Adamic (2012) The role of social networks in information diffusion. In Proceedings of the 21st international conference on World Wide Web, pp. 519–528. Cited by: §1, Example 2.
  • G. W. Basse, A. Feller, and P. Toulis (2019) Randomization tests of causal effects under interference. Biometrika 106 (2), pp. 487–494. Cited by: §2.1, Example 3.
  • R. M. Bond, C. J. Fariss, J. J. Jones, A. D. Kramer, C. Marlow, J. E. Settle, and J. H. Fowler (2012) A 61-million-person experiment in social influence and political mobilization. Nature 489 (7415), pp. 295–298. Cited by: §1.
  • J. Cai, A. De Janvry, and E. Sadoulet (2015) Social networks and the decision to insure. American Economic Journal: Applied Economics 7 (2), pp. 81–108. Cited by: §1, §5, Example 1.
  • R. Chetty (2009) Sufficient statistics for welfare analysis: a bridge between structural and reduced-form methods. Annu. Rev. Econ. 1 (1), pp. 451–488. Cited by: §2.
  • A. Deaton (2009) Instruments of development: randomisation in the tropics, and the search for the elusive keys to economic development. In Proceedings of the British Academy, Vol. 162, pp. 123–160. Cited by: §5.
  • D. Eckles, B. Karrer, and J. Ugander (2017) Design and analysis of experiments in networks: reducing bias from interference. Journal of Causal Inference 5 (1). Cited by: §1.
  • M. E. Halloran and C. J. Struchiner (1995) Causal inference in infectious diseases. Epidemiology, pp. 142–151. Cited by: §1.
  • J. J. Heckman, L. Lochner, and C. Taber (1998) General-equilibrium treatment effects: a study of tuition policy. The American Economic Review 88 (2), pp. 381–386. Cited by: §2.
  • J. J. Heckman and S. Urzua (2010) Comparing IV with structural models: what simple IV can and cannot identify. Journal of Econometrics 156 (1), pp. 27–37. Cited by: §5.
  • M. G. Hudgens and M. E. Halloran (2008) Toward causal inference with interference. Journal of the American Statistical Association 103 (482), pp. 832–842. Cited by: §1, §1, §2.1, §2.1, §2, Definition 1.
  • G. W. Imbens and J. D. Angrist (1994) Identification and estimation of local average treatment effects. Econometrica, pp. 467–475. Cited by: §5.
  • G. W. Imbens (2010) Better LATE than nothing: some comments on Deaton (2009) and Heckman and Urzua (2009). Journal of Economic literature 48 (2), pp. 399–423. Cited by: §5.
  • V. Karwa and E. M. Airoldi (2018) A systematic investigation of classical causal inference strategies under mis-specification due to network interference. arXiv preprint arXiv:1810.08259. Cited by: §2.1.
  • M. P. Leung (2020) Treatment and spillover effects under network interference. Review of Economics and Statistics 102 (2), pp. 368–380. Cited by: §1, §2.1, §4, §5, Example 1.
  • S. Li and S. Wager (2020) Random graph asymptotics for treatment effect estimation under network interference. arXiv preprint arXiv:2007.13302. Cited by: §1, §4.
  • C. F. Manski (2013) Identification of treatment response with social interactions. The Econometrics Journal 16 (1), pp. S1–S23. Cited by: §1.
  • E. Miguel and M. Kremer (2004) Worms: identifying impacts on education and health in the presence of treatment externalities. Econometrica 72 (1), pp. 159–217. Cited by: §1.
  • J. Neyman (1923) Sur les applications de la théorie des probabilités aux experiences agricoles: essai des principes. Roczniki Nauk Rolniczych 10, pp. 1–51. Cited by: §1.
  • T. Rogers and A. Feller (2018) Reducing student absences at scale by targeting parents’ misbeliefs. Nature Human Behaviour 2 (5), pp. 335–342. Cited by: §1, Example 3.
  • D. B. Rubin (1974) Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66 (5), pp. 688. Cited by: §1.
  • B. Sacerdote (2001) Peer effects with random assignment: results for Dartmouth roommates. The Quarterly Journal of Economics 116 (2), pp. 681–704. Cited by: §1.
  • F. Sävje, P. M. Aronow, and M. G. Hudgens (2020) Average treatment effects in the presence of unknown interference. The Annals of Statistics forthcoming. Cited by: §1, §1, §2, §4, §4.
  • B. Sinclair, M. McConnell, and D. P. Green (2012) Detecting spillover effects: design and analysis of multilevel experiments. American Journal of Political Science 56 (4), pp. 1055–1069. Cited by: Example 3.
  • E. J. Tchetgen Tchetgen and T. J. VanderWeele (2012) On causal inference in the presence of interference. Statistical Methods in Medical Research 21 (1), pp. 55–75. Cited by: §1, §2.1.
  • S. Wager and K. Xu (2021) Experimenting in equilibrium. Management Science forthcoming. Cited by: §2.