Relaxed covariate overlap and margin-based causal effect estimation

01/02/2018 ∙ by Debashis Ghosh, et al.

In most nonrandomized observational studies, differences between treatment groups may arise not only due to the treatment but also because of the effect of confounders. Therefore, causal inference regarding the treatment effect is not as straightforward as in a randomized trial. To adjust for confounding due to measured covariates, a variety of methods based on the potential outcomes framework are used to estimate average treatment effects. One of the key assumptions is treatment positivity, and methods for performing causal inference when this assumption is violated are relatively limited. In this article, we explore the issue of covariate overlap and discuss a new condition involving overlap in the convex hulls of treatment groups, which we term relaxed covariate overlap. An advantage of this concept is that it can be linked to a concept from machine learning, termed the margin. Introduction of relaxed covariate overlap leads to an approach in which we can perform causal inference in a three-step manner. The methodology is illustrated with two examples.




1 Introduction

For many scientific settings, researchers wish to understand the effect of an intervention on a response. While randomization of the intervention and its evaluation in a prospective clinical trial can provide strong assessments in many instances, for other situations, it is not possible to conduct such a study due to administrative and/or ethical constraints. Thus, many investigators are left with having to evaluate effects of interventions in observational, non-randomized studies. This has spawned much research interest in the area of causal inference primarily based on use of the potential outcomes framework (e.g., [1]).

Recently, much attention in the literature on causal inference has been paid to the issue of covariate balance. This has to do with ensuring that the distributions of confounders in the treatment and control groups have sufficient overlap. This is related to the treatment positivity assumption that is outlined in §2.1. One way covariate balance is checked in practice is by comparing distributions of individual confounders between the two treatment groups in matched samples (e.g., Chapter 14 of [1]). Matching is typically performed to obtain robust estimation of a causal effect. Here, robustness means that estimation of the causal effect does not require extrapolation of the potential outcomes to regions of the covariate space in which observations from one treatment group are missing. This phenomenon is nicely illustrated in a simple one-dimensional example in Figure 1.

Figure 1: Histogram of 200 observations: 100 simulated from one normal distribution with unit variance, and the other 100 simulated from a second normal distribution with a different mean and unit variance. The blue density corresponds to the former population, while the green density corresponds to the latter.

The blue and green density lines represent two different populations. Towards the left of the figure, most of the observations come from the blue population, while the reverse is true in the right-hand side of the picture. Causal inference is about attempting to infer differences in the outcomes between the blue and green populations. The portion of the picture where robust causal inference could be performed would be the region where the densities of the two populations intersect, which is in the middle of the picture. If one wished to make causal inference in the left part of the picture, this would require model-based extrapolation of outcomes for the green group, and conversely, for the right-hand side of the picture, model-based extrapolation of outcomes for the blue group would be needed. Thus, I am defining any situation where model-based extrapolation is needed as not being robust.
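The overlap idea in Figure 1 can be sketched numerically. The means of -1 and 1 below are illustrative stand-ins (the values used in the figure are not recoverable here); the point is that robust inference is restricted to covariate values observed in both samples.

```python
# Sketch of the one-dimensional overlap region from Figure 1.
# The means -1 and 1 are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
blue = rng.normal(-1.0, 1.0, size=100)   # "blue" population
green = rng.normal(1.0, 1.0, size=100)   # "green" population

# Region where robust causal inference is possible: covariate values
# observed in BOTH samples, so no model-based extrapolation is needed.
lo = max(blue.min(), green.min())
hi = min(blue.max(), green.max())
overlap_blue = blue[(blue >= lo) & (blue <= hi)]
overlap_green = green[(green >= lo) & (green <= hi)]
print(lo, hi, overlap_blue.size, overlap_green.size)
```

Observations outside [lo, hi] are exactly those for which any comparison of the two populations would require model-based extrapolation.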

There has been great attention paid to the use of matching techniques for causal effect estimation [2, 3, 4]. A relatively new thread of statistical research has been to focus on estimation procedures that seek to optimize covariate balance in causal effect modelling. This can be done either by modelling the propensity score to satisfy covariate balance [5], using calibration estimators originally introduced in the survey sampling literature that will satisfy covariate balance [6] or by matching [7]. These procedures have been shown to yield weights that are less extreme and lead to causal effect estimators with better properties.

One situation where covariate balance does not occur is limited treatment overlap, described in [8]. They characterized its effects in a setting with limited numbers of covariates and developed a simple rule to exclude subjects based on the propensity score. The procedure of Crump et al. [8] relies on having available propensity score estimates that are consistent for the true propensity score. There are two limitations of their methodology. First, the procedure might not be very robust to model misspecification. Second, it has been pointed out by several authors that the propensity score might have problems in higher dimensions. To address this, one proposal was given in [9] and consisted of using classification and regression trees (CART). It takes the Crump et al. [8] definition of a study population with sufficient overlap and then fits a classification tree to an indicator of whether the subject is in the final population. The procedure in [9] leads to interpretable regions that define a study population about which one can make causal inferences.

The proposals of [8] and [9] amount to identifying regions in which there is sufficient covariate balance between the treatment and control groups. Similarly, Ratkovic [11] developed an approach to causal effect estimation based on support vector machines [10]. In this article, I focus on the use of the margin for causal effect estimation. The contributions of this paper are the following:

  1. Development of a characterization of covariate overlap in a multivariate sense, termed relaxed covariate overlap, using geometric ideas and relating the problem to margin-based classification.

  2. Development of a simple three-step approach to causal inference estimation that avoids the tautology of covariate balance checking.

  3. Extension of the margin-based approach to multicategorical and continuous treatments.

As described in [12], targeting the observations where covariates overlap should lead to causal effect estimation that does not require model extrapolation; this typically runs counter to what standard classifiers aim to do, which is to find maximal separation between populations. It will be seen that in the multivariate case, relaxed covariate overlap can be characterized using linear hyperplanes and can thus be tied to support vector machines. An implication of the methodology is that, by excluding subjects, the causal estimand effectively becomes data-adaptive. We provide some justification for the use of data-adaptive estimands in §3.1 and §3.2. The structure of this paper is as follows. In Section 2, we summarize the potential outcomes framework and describe work in [8] and [9] on methods for causal effect estimation in situations where treatment positivity is violated. Section 3 introduces the relaxed covariate overlap condition and demonstrates how it can be tied to a well-known problem in computational geometry, that of determining overlaps of convex hulls of points. Based on this equivalence, I outline a three-step approach to causal effect estimation. In Section 4, two examples are used to illustrate the methodology. Section 5 concludes with some discussion.

2 Background

2.1 Preliminaries and causal inference assumptions

In this paper, I will employ the potential outcomes framework [13, 14], which has been widely used in causal modelling. I assume the Stable Unit Treatment Value Assumption (SUTVA), which states that the potential outcomes for subject $i$ are statistically independent of the potential outcomes for all other subjects $j$, $j \neq i$. Let $Y$ denote the response of interest and $Z$ be a $p$-dimensional vector of confounders. Let $T$ be a binary indicator of treatment exposure that takes the values $\{0, 1\}$, where $T = 1$ if treated and $T = 0$ if control. Let the observed data be represented as $(Y_i, T_i, Z_i)$, $i = 1, \ldots, n$, a random sample from $(Y, T, Z)$. Define $\{Y_i(0), Y_i(1)\}$ to be the potential outcomes under control and treatment for subject $i$, where $i = 1, \ldots, n$.

What the analyst observes is $Y_i = T_i Y_i(1) + (1 - T_i) Y_i(0)$, which implies that $Y_i(0)$ and $Y_i(1)$ cannot be observed simultaneously for the $i$th subject. Two possible parameters of interest are the average causal effect:

$$\text{ACE} = E\{Y(1) - Y(0)\},$$

and the average causal effect among the treated:

$$\text{ACET} = E\{Y(1) - Y(0) \mid T = 1\}.$$
The ACET is of particular interest when the study population consists of those who actually receive the treatment. For example, a smoking cessation researcher may wish to know: for those who actually smoke, what is the difference in expected life expectancy had they not smoked? In this example, the researcher is interested in estimating the ACET.

Here and in the sequel, I focus on the ACE. In a randomized study, the treatment assignment is completely determined by randomization. Therefore, $T$ is statistically independent of $\{Y(0), Y(1)\}$. Consequently, an unbiased estimator for the ACE is given by

$$\widehat{\text{ACE}} = \frac{\sum_{i=1}^{n} T_i Y_i}{\sum_{i=1}^{n} T_i} - \frac{\sum_{i=1}^{n} (1 - T_i) Y_i}{\sum_{i=1}^{n} (1 - T_i)}.$$
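As a quick numerical illustration, the difference-in-means estimator of the ACE under randomization can be sketched as follows; the synthetic data and the true effect of 2.0 are arbitrary choices for this example.

```python
# Hedged sketch: difference-in-means estimation of the ACE in a
# randomized study, on synthetic data with a true effect of 2.0.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
T = rng.integers(0, 2, size=n)        # randomized binary treatment
Y0 = rng.normal(0.0, 1.0, size=n)     # potential outcome under control
Y1 = Y0 + 2.0                         # potential outcome under treatment
Y = np.where(T == 1, Y1, Y0)          # observed outcome

# Difference of treatment-arm means estimates the ACE.
ace_hat = Y[T == 1].mean() - Y[T == 0].mean()
print(ace_hat)   # close to the true ACE of 2.0
```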

In an observational study, the vector of covariates $Z$ could be related to both the outcome and the treatment assignment. Since both $T$ and the potential outcomes $\{Y(0), Y(1)\}$ are affected by $Z$, independence of $T$ and $\{Y(0), Y(1)\}$ will not hold. To enable causal inference in this scenario, I make the following further assumptions.

  1. Strongly Ignorable Treatment Assumption (SITA): $\{Y(0), Y(1)\}$ is statistically independent of $T$ given $Z$.

  2. Treatment Positivity Assumption (TP): $0 < P(T = 1 \mid Z = z) < 1$ for all values $z$.

SITA means that the potential outcomes are conditionally independent of treatment given the confounders. Conceptually, an implication of SITA is that, by conditioning on the same value of $Z$, we can assume that the observed outcomes behave as if they came from a randomized study. Rosenbaum and Rubin [15] show that if SITA holds, then the treatment is independent of the potential outcomes given the propensity score, defined as $e(Z) = P(T = 1 \mid Z)$. SITA is also referred to as the 'no unmeasured confounders' assumption in the statistical literature [16]. The positivity assumption TP means that the probability of receiving treatment is positive for any individual in the study. I note that this assumption could be relaxed to the following: $0 < P(T = 1 \mid Z = z) < 1$ whenever $z$ lies in the support of $Z$.

In practice, the TP assumption ensures sufficient covariate overlap. Balance is necessary in order to develop reliable estimates of causal effects that do not rely on model extrapolation. There has been a lot of work on developing reliable balance metrics (e.g., [17, 18]). However, these procedures implicitly rely on treatment positivity. I next summarize the proposals of [8] and [9] to diagnose and correct for violations in the treatment positivity assumption.
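A basic practical check of positivity uses estimated propensity scores. The following is an illustrative sketch, not a procedure from the paper: fit a logistic propensity model and compare the ranges of estimated scores across treatment arms.

```python
# Hedged sketch of a positivity check via estimated propensity scores.
# The data-generating choices here are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
Z = rng.normal(size=(n, 3))                     # confounders
logit = 1.5 * Z[:, 0] - 1.0 * Z[:, 1]
T = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # treatment assignment

# Estimated propensity scores e(Z) = P(T = 1 | Z).
ps = LogisticRegression().fit(Z, T).predict_proba(Z)[:, 1]

# Positivity is suspect where one arm has no scores near the
# other arm's range (limited overlap of the score distributions).
print(ps[T == 1].min(), ps[T == 0].max())
```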

2.2 Related work

Crump et al. [8] noted the possibility that assumption (2), that of treatment positivity, could be violated. In this case, one conceptual potential outcome of the individual will never be observed. In [8], the authors define a subpopulation average causal effect using the propensity score. Define the region $B$ as a subset of the support of $Z$. It will be of the form $B = \{z : \alpha \leq e(z) \leq 1 - \alpha\}$ for some $0 < \alpha < 1/2$. Suppose we observe the data $(Y_i, T_i, Z_i)$, $i = 1, \ldots, n$. Assume also that the propensity score is known. Then Crump et al. [8] define the subpopulation average causal effect as

$$\text{ACE}(B) = E\{Y(1) - Y(0) \mid Z \in B\}.$$

Note that $\text{ACE}(B)$ depends on the region $B$ defined through the propensity scores. In practice, the propensity score is not known and must be estimated from the data.

As pointed out in [8], construction of the region $B$ leads to a tradeoff. The sample size is reduced from $n$ to the number of subjects with $Z_i \in B$, which will lead to increased variability of estimated effects. On the other hand, narrowing the population to subjects whose covariate values are sufficiently balanced will tend to lead to diminished variability in the causal effect estimates. To simplify calculations, Crump et al. [8] base inference for the subpopulation average causal effect conditional on the region; in other words, they ignore variability in the estimation of the region $B$. Conditional on the first stage, they propose an optimization criterion based on the variability of the estimated subpopulation average causal effect. Crump et al. [8] show under some mild assumptions that an optimal $\alpha$ exists, that it depends only on the marginal distribution of the propensity scores, and they propose a simple algorithm for its estimation. The search algorithm is one-dimensional due to the dimension-reducing property of the propensity score, from $p$ dimensions to a scalar, but this also highlights the dependence on the fitted propensity score model. If the propensity score model is misspecified, then the approach in [8] might lead to very biased results. A more robust alternative approach, based on classification and regression trees, was given in [9]. Classification and regression trees fit piecewise-constant partitions to the observed covariate space. While the Traskin and Small [9] algorithm should be more robust to model misspecification than the proposal in [8], tree models have some limitations as well. In particular, the splits fit to the data are axis-parallel planes, which might be too restrictive a class. In addition, the categorization of observations in [9] depends on good estimates of the propensity score. If there is misspecification in that step, then the tree models are effectively being fit to misspecified outcomes. Finally, the tree model fit is based on observed covariates. Thus, the procedure works well if there are covariates that determine the sufficient overlap. If this is not true, then the proposed procedure will still not provide sufficient balance.
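The trimming rule above can be sketched in a few lines. The cutoff of 0.1 below is the commonly cited rule-of-thumb value, used here purely for illustration.

```python
# Hedged sketch of a Crump-style trimming rule: retain subjects whose
# estimated propensity score lies in [alpha, 1 - alpha].
import numpy as np

def trim(ps, alpha=0.1):
    """Indices of subjects retained under the trimming rule."""
    return np.flatnonzero((ps >= alpha) & (ps <= 1 - alpha))

# Toy propensity scores: the first and last subjects are trimmed.
ps = np.array([0.02, 0.15, 0.5, 0.8, 0.95])
print(trim(ps))   # -> [1 2 3]
```

Causal effect estimation then proceeds on the retained subset, with the estimand redefined as the average effect over that subpopulation.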

It should also be noted that there are in fact two types of violations of positivity assumptions to consider. The first is inherent to the model itself, while the second has to do with violations given a finite dataset. The former might be referred to as a "structural" positivity violation, while the latter is a "practical" positivity violation. Here, we mostly deal with the latter situation.

3 Proposed Methodology

3.1 Theoretical considerations

Before describing the proposed approach, it is important to point out that it will effectively amount to defining a data-driven causal estimand. While there has been a major focus in causal inference to define the appropriate scientific estimand before data collection and to thus avoid the use of data-driven causal estimands, I provide some justification for this approach in the current setting.

While §2.1 discusses the TP assumption in the potential outcomes framework, I recall the work of [19], who show that for semiparametric estimators of the average causal effect to have regular behavior in the high-dimensional case, the model classes for the propensity score and outcome models have to be well-behaved. For the propensity score, this involves strengthening the TP assumption to the following:

$$\epsilon \leq e(z) \leq 1 - \epsilon \quad (3)$$

for some $\epsilon > 0$, uniformly in $z$. Note that this means that the propensity score is uniformly bounded away from zero and one. This is different from the TP assumption in that the latter does not require uniform boundedness away from zero and one. Violations of (3) lead to irregularities in estimation and inference. It is stronger than the treatment positivity assumption in that $\epsilon$ does not depend on $z$. Recently, [20] showed that (3) implies a bounded likelihood ratio for the distributions of confounders conditional on treatment group. The implication is that (3) becomes a more restrictive assumption as the number of covariates increases. However, if this assumption does not hold, then this allows for 'pathological' data-generating distributions, as described in [19], that lead to causal effect estimators with irregular asymptotic properties. This issue was described in [21] and [22]. The approach of identifying the margin and estimating conditional causal effects, while data-adaptive, potentially avoids the problem of irregular statistical behavior that would plague average causal effect estimators in the high-dimensional confounder setting.
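The bounded-likelihood-ratio implication can be made explicit with a short Bayes-rule calculation; writing $\pi = P(T = 1)$ and $f$ for the marginal density of $Z$, this is a sketch of the step, not the derivation in [20]:

```latex
\frac{f(z \mid T = 1)}{f(z \mid T = 0)}
  = \frac{e(z) f(z) / \pi}{\{1 - e(z)\} f(z) / (1 - \pi)}
  = \frac{e(z)}{1 - e(z)} \cdot \frac{1 - \pi}{\pi},
```

so under (3) the ratio is bounded between $\frac{\epsilon}{1-\epsilon}\cdot\frac{1-\pi}{\pi}$ and $\frac{1-\epsilon}{\epsilon}\cdot\frac{1-\pi}{\pi}$; the confounder distributions in the two arms cannot diverge arbitrarily.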

3.2 Practical considerations and analogy with propensity score matching

The class of methods considered in this paper involves defining data-adaptive causal estimands. This is similar to the proposals in [8] and [9]. We wish to point out that another popular approach to estimation of causal effects, that of propensity score matching, implicitly involves use of a data-adaptive estimand. Typically, treated and nontreated subjects with similar propensity scores are matched to each other, and some subjects are excluded from the matched dataset if comparable subjects from the other treatment group cannot be found. Causal effect estimation then proceeds on the matched dataset. While this is a commonly used approach to estimation of causal effects, I note that what is being estimated is a causal estimand that is data-adaptive in nature.

Much of my approach in the current article mimics what is done with propensity score matching. As a heuristic, analysts who engage in the use of propensity score matching typically take the following steps:

  (a) Fit a propensity score model to the data;

  (b) Match using some algorithm based on the propensity score;

  (c) Check for balance in covariates in the matched dataset; if there is imbalance, repeat steps (a)-(c) using transformations of the covariates that violate the balance condition;

  (d) Estimate the causal effect in the matched dataset.

For each of the steps there is a variety of choices one can use. However, as discussed in Theorem 10.1 of [1], at the last step there is no adjustment to the standard errors. They show from their theorem that this variance estimator will overestimate the true variance, so that any inferences being made will be conservative. I adopt the same approach to inference in this paper.

I note that the approach taken in the paper is but one method with which to handle the issue of limited treatment positivity. A fuller account can be found in [23], but alternative approaches include restricting the space of treatments, redefining the causal estimand, and using alternative projection functions.

3.3 Geometric viewpoint

To motivate the methodology, I first introduce the concept of a convex hull for a set of points.

Definition. Let $S = \{z_1, \ldots, z_m\}$ be a set of points in $\mathbb{R}^p$. Then the convex hull of $S$ is given by

$$C(S) = \left\{ \sum_{i=1}^{m} \lambda_i z_i : \lambda_i \geq 0, \; \sum_{i=1}^{m} \lambda_i = 1 \right\}.$$

Thus, one sees that the convex hull consists of all convex combinations of points in $S$. There are characterizations of a convex hull equivalent to the definition given here, including the unique minimal convex set containing $S$, the intersection of all convex sets containing $S$, and the union of all simplices whose vertices are points in $S$. Intuitively, a convex hull is a multidimensional region that is 'filled up' and has no 'holes' in it. In addition, because it is built from combinations of the observed data points, any interpolation done in generating the convex hull is by definition a function of the data points only and avoids the model extrapolation issue referred to in the Introduction. Further details on convex hulls and related topics can be found in [24] and [25].

A natural extension of covariate overlap to the multivariate case is to require that the convex hulls of the confounders in the two treatment groups coincide. However, finding convex hulls in higher dimensions is a very computationally challenging problem; Chazelle [26] has shown that the convex hull of $n$ points in $p$ dimensions can be computed in $O(n \log n + n^{\lfloor p/2 \rfloor})$ time, which is worst-case optimal. I instead focus on the problem of a nonempty intersection of the convex hulls. Let the numbers of subjects with $T = 0$ and $T = 1$ be $n_0$ and $n_1$, respectively, and denote the $n_0 \times p$ and $n_1 \times p$ matrices of confounders by $X_0$ and $X_1$. The condition is that

$$C(X_0) \cap C(X_1) \neq \emptyset,$$

where $C(X_t)$ denotes the convex hull of the rows of $X_t$. We refer to this condition as relaxed covariate overlap.
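Relaxed covariate overlap can be checked directly as a linear-programming feasibility problem: the hulls intersect exactly when some convex combination of the rows of $X_0$ equals a convex combination of the rows of $X_1$. The following is an illustrative sketch (the function name and the toy triangles are my own), assuming scipy is available.

```python
# Hedged sketch: test relaxed covariate overlap (intersecting convex
# hulls of the two confounder matrices) via LP feasibility.
import numpy as np
from scipy.optimize import linprog

def hulls_intersect(X0, X1):
    """True if conv(rows of X0) and conv(rows of X1) share a point."""
    n0, p = X0.shape
    n1 = X1.shape[0]
    # Find simplex weights a, b with X0' a = X1' b.
    A_eq = np.zeros((p + 2, n0 + n1))
    A_eq[:p, :n0] = X0.T
    A_eq[:p, n0:] = -X1.T
    A_eq[p, :n0] = 1.0        # weights on X0 rows sum to one
    A_eq[p + 1, n0:] = 1.0    # weights on X1 rows sum to one
    b_eq = np.r_[np.zeros(p), 1.0, 1.0]
    res = linprog(c=np.zeros(n0 + n1), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n0 + n1))
    return res.success        # feasible <=> the hulls intersect

X0 = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
X1 = np.array([[1.0, 1.0], [3.0, 1.0], [1.0, 3.0]])
print(hulls_intersect(X0, X1))  # these triangles overlap -> True
```

This avoids constructing the hulls themselves, sidestepping the worst-case complexity noted above.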

This discussion naturally leads to consideration of the following optimization problem: minimize, over $\alpha \in \mathbb{R}^{n_0}$ and $\beta \in \mathbb{R}^{n_1}$,

$$\left\| X_0^{\top} \alpha - X_1^{\top} \beta \right\|^2 \quad \text{subject to } \alpha_i \geq 0, \; \textstyle\sum_i \alpha_i = 1, \; \beta_j \geq 0, \; \textstyle\sum_j \beta_j = 1. \quad (4)$$

We note that (4) is equivalent to the problem of finding the two closest points in the convex hulls $C(X_0)$ and $C(X_1)$. If the minimum of (4) is positive, then the convex hulls do not overlap, which means that there is no covariate balance between the treated and control groups; equivalently, the relaxed covariate overlap condition is not satisfied. Conversely, if the minimum of (4) is zero, then relaxed covariate overlap between the $T = 0$ and $T = 1$ populations is satisfied. I next derive the dual optimization problem of (4).

Theorem 1. Minimizing (4) over $\alpha$ and $\beta$ is the dual optimization problem of minimizing

$$\frac{1}{2} \|\theta\|^2 \quad \text{subject to } \theta^{\top} Z_i + b \geq 1 \text{ if } T_i = 1, \quad \theta^{\top} Z_i + b \leq -1 \text{ if } T_i = 0, \quad (5)$$

over $\theta \in \mathbb{R}^p$ and $b \in \mathbb{R}$.

Proof: The Lagrangian associated with (5) is given by

$$L(\theta, b, \alpha, \beta) = \frac{1}{2}\|\theta\|^2 + \sum_{i : T_i = 0} \alpha_i \left(\theta^{\top} Z_i + b + 1\right) - \sum_{i : T_i = 1} \beta_i \left(\theta^{\top} Z_i + b - 1\right). \quad (6)$$

We now seek to minimize $L$ with respect to $\theta$ and $b$ and maximize with respect to $\alpha \geq 0$ and $\beta \geq 0$. Differentiating with respect to $\theta$ and $b$ and setting the derivatives equal to zero yields

$$\theta = \sum_{i : T_i = 1} \beta_i Z_i - \sum_{i : T_i = 0} \alpha_i Z_i,$$

subject to $\sum_{i : T_i = 0} \alpha_i = \sum_{i : T_i = 1} \beta_i$. Plugging this expression for $\theta$ into (6), simplifying, and normalizing the weights to sum to one yields the result of the theorem.

There is a natural interpretation of Theorem 1 as well. The equations $\theta^{\top} z + b = 1$ and $\theta^{\top} z + b = -1$ define hyperplanes, and an implication of Theorem 1 is that (4) has a solution with positive objective value if and only if there is a solution to (5). The constraints $\theta^{\top} Z_i + b \geq 1$ for the treated and $\theta^{\top} Z_i + b \leq -1$ for the controls mean that there exists a hyperplane that perfectly separates the confounders for the $T = 0$ and $T = 1$ populations. In machine learning terminology, this scenario corresponds to the observations being linearly separable. Furthermore, provided there exists a solution to the problem in Theorem 1, $\theta^{\top} z + b = 0$ defines the hyperplane that maximizes the distance between the supporting hyperplanes for $C(X_0)$ and $C(X_1)$.

The quantity $2/\|\theta\|$ is referred to as the margin in the machine learning literature [27]. The optimization problems (4) and (5), in words, represent the following equivalence:

linear separability of the two treatment groups $\iff$ non-overlapping convex hulls of the confounders.

If the data are not linearly separable, then the convex hulls $C(X_0)$ and $C(X_1)$ will overlap. The points in the overlap will be identical to those that fall in the margin and represent those observations for which causal inference without model extrapolation is feasible. This is very intuitive in the sense that points in the margin are difficult to classify into treatment groups, so these are the observations for which the TP assumption will be valid. This also suggests that, to identify observations that satisfy covariate balance, it is important to target the margin as the criterion on which to optimize. At a high level, the approach being proposed here amounts to the following:

  1. Fit a model to the data $(T_i, Z_i)$, $i = 1, \ldots, n$.

  2. Determine the observations that are in the margin. Let this set of observations be denoted as $M$.

  3. Estimate the causal effect of interest using $(Y_i, T_i, Z_i)$, $i \in M$.

Going through this progression, the goal of the first two steps is to identify the observations that are likely to satisfy the treatment positivity assumption. The first step is typically done using propensity score models, although other models could be entertained at that step. In step two of the procedure, observations that are not in the margin are discarded from the analysis. Thus, the margin-determination step leads to selection of observations for performing causal inference. Another approach by which observations get discarded is matching, where unmatchable observations are removed (e.g., [17]). However, there is no underlying concept of a margin in that approach. Finally, an advantage of this procedure is that it does not require balance checking, as is the norm in causal inference problems.
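The three steps above can be sketched with a linear SVM, taking the margin to be the observations with decision value inside the unit band; the difference-in-means in the final step is a simple stand-in for the matching estimator used later in the paper, and the simulated data are illustrative.

```python
# Hedged sketch of the three-step margin procedure.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 400
Z = rng.normal(size=(n, 2))
T = rng.binomial(1, 1 / (1 + np.exp(-2.0 * Z[:, 0])))
Y = Z[:, 0] + 1.0 * T + rng.normal(size=n)   # true effect 1.0

# Step 1: fit a model separating the treatment arms.
svm = SVC(kernel="linear", C=1.0).fit(Z, T)

# Step 2: the margin M = observations with |f(z)| < 1.
in_margin = np.abs(svm.decision_function(Z)) < 1.0

# Step 3: estimate the causal effect within the margin only.
Tm, Ym = T[in_margin], Y[in_margin]
effect = Ym[Tm == 1].mean() - Ym[Tm == 0].mean()
print(in_margin.sum(), effect)
```

Observations outside the band are those the classifier assigns to a treatment arm with confidence, i.e., the ones most likely to violate positivity.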

Remark 1: Ratkovic [11] developed an algorithm for achieving balance in causal inference problems that uses support vector machines. Ratkovic identifies the points in the margin as being the relevant ones for causal inference. However, the derivation presented there involves first-order conditions based on optimizing the penalized loss function corresponding to the SVM. By contrast, I have started from geometric principles of overlapping convex hulls in order to identify the margin.

3.4 Support Vector Machines

As alluded to earlier, support vector machines (SVMs) represent another class of algorithms that seek to optimize the margin. The objective of the SVM is to find a linear hyperplane that maximizes the margin between the populations defined by $T = 0$ and $T = 1$. Let $\tilde{T}_i = 2T_i - 1 \in \{-1, +1\}$. SVMs are formulated using the following optimization problem: minimize, as a function of $b$ and $\theta$, the norm $\|\theta\|$ subject to $\tilde{T}_i(\langle \theta, Z_i \rangle + b) \geq 1$, $i = 1, \ldots, n$. Here, $\langle u, v \rangle$ denotes the inner product between vectors $u$ and $v$, with $\langle u, u \rangle = \|u\|^2$. Note that $2/\|\theta\|$ is the margin. Finding the hyperplane that maximizes the margin is equivalent to minimizing the square of the inverse of the margin. This turns out to be a quadratic programming problem and can be phrased formally using Lagrange multipliers as

$$\max_{\gamma \geq 0} \; \sum_{i=1}^{n} \gamma_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \gamma_i \gamma_j \tilde{T}_i \tilde{T}_j \langle Z_i, Z_j \rangle \quad \text{subject to } \sum_{i=1}^{n} \gamma_i \tilde{T}_i = 0.$$

Using the Karush-Kuhn-Tucker conditions from optimization theory, it turns out that the solution can be represented as $\theta = \sum_{i=1}^{n} \gamma_i \tilde{T}_i Z_i$, where only a subset of the observations will have $\gamma_i > 0$. The remaining observations will have $\gamma_i = 0$. The subjects for which $\gamma_i > 0$ are termed the support vectors. It turns out the margin depends only on the support vectors. Thus, one of the appealing features of SVMs is that they are sparse in the observations in the dataset. This yields a very simple classification rule: if $\langle \theta, z \rangle + b > 0$, predict $\tilde{T} = 1$; if $\langle \theta, z \rangle + b < 0$, predict $\tilde{T} = -1$. Thus, using the arguments in §3.3, if the data are linearly separable using this hyperplane, then one would not have relaxed covariate overlap. The observations that violate the classification rule, which are equivalent to the misclassified observations, represent the margin that I will use to perform causal inference. Intuitively, this makes sense, as points in the margin represent those points about which there is uncertainty as to the classification of the subject (i.e., $T = 0$ or $T = 1$), while for points outside of the margin one is quite certain as to their treatment label. The key point is that the SVM defines the region $M$, which represents the part of the sample for which a causal effect will be estimated. This is in keeping with the idea posited in [28] that the population under study can be a subpopulation of the original population based on observed covariate values.

To graphically illustrate the concept, I simulated data using two bivariate normal populations. The plotted data and the fitted SVM-based margin are shown in Figure 2.

Figure 2: Example of the margin concept in a bivariate normal example. I have simulated two populations, represented as circles and triangles, generated from two bivariate normal distributions with different means; both distributions have an identity variance-covariance matrix. I have fit a support vector machine to the data using a linear kernel with a fixed cost parameter. The observations that are in the margin are filled in (i.e., the filled-in circles and triangles). The proposal is to use the black points for causal effect estimation.

3.5 Causal effect estimation, inference and other outputs

In this article, I focus on the estimation of a particular subpopulation average causal effect. Recall the general three-step approach outlined in §3.3: (a) fit a propensity score model to the data $(T_i, Z_i)$, $i = 1, \ldots, n$; (b) identify the observations in the margin based on the model fit in (a) and label the set of observations in the margin as $M$; (c) perform causal effect estimation using $(Y_i, T_i, Z_i)$, $i \in M$. The goals of steps (a) and (b) are to define the subdataset on which I will estimate a causal effect. This is in keeping with the principle outlined in [17] that there can be a preprocessing step in which observations are discarded before beginning to perform causal inference. The preprocessing here is to remove the observations that violate treatment positivity, which are the observations that do not fall into the margin. This is also in line with the spirit of [29], who advocates separating the outcome model step from the propensity score model step and not having feedback between the two. Steps (a) and (b) only involve the propensity score model, and an appropriate population of interest gets determined at this stage. Thus, the inference in step (c) will be conditional, in that the margin needs to be found first, and causal effect estimation in step (c) then happens conditional on the margin.

For step (c), there are many choices available for causal effect estimation. For the purposes of illustration, here I will use the optimal matching approach described in [2]. However, as discussed in [30], there are many ways to perform causal inference in step (c). I take the approach of [17] in that no adjustment needs to be made to the standard errors based on the analysis in the matched sample. Evaluating various approaches to standard error estimation for the causal effect in the matched sample setting remains an open topic and the subject of future investigation.

One of the features of the proposed use of the margin for causal inference is that one can assess violations of the TP assumption. Recall that TP means that $0 < P(T = 1 \mid Z = z) < 1$ for all values of $z$; in words, the probability of receiving treatment is positive for any individual in the study. However, there may be many situations in which this assumption is violated, for practical or empirical reasons. For example, in the idealized setting of a clinical study, only certain individuals might be allowed to receive treatment based on observed covariate values. Points that are outside of the margin represent those where TP is violated. As in [9], I could model membership in the margin using a classification and regression tree, which might provide a nice, interpretable descriptive summary of which factors define being in the margin.

Remark 2: There has been much success in combining propensity score modelling with mean outcome modelling for causal inference using collaborative targeted learning [21]. In fact, in Chapter 21 of [31], it is shown that this approach can successfully deal with problematic causal inference scenarios. In principle, it may be possible to extend the targeted learning roadmap to include margin identification as part of the approach, but this is beyond the scope of the current manuscript.

3.6 Extension to multicategorical treatments

One of the advantages of the geometric notion of overlap described in §3.3 is that it admits a natural extension to multicategorical treatment variables and thus allows for a natural extension of the causal inference approach to multiple treatments. I note that this problem has received less attention than the binary case, although some exceptions include [32, 33].

I need to modify the assumptions from §2.1 to accommodate multiple treatment levels. Let $T$ take values in $\{1, \ldots, K\}$. I then make assumptions generalizing those in §2.1:

  1. The potential outcomes $\{Y_i(1), \ldots, Y_i(K)\}$ for subject $i$ are statistically independent of the potential outcomes for all other subjects $j$, $j \neq i$.

  2. $\{Y(1), \ldots, Y(K)\}$ is statistically independent of $T$ given $Z$.

  3. $P(T = k \mid Z = z) > 0$ for all values $z$ and for all $k = 1, \ldots, K$.

Arguing as in §3.3, one can arrive at the analogous equivalence: pairwise linear separability of the treatment groups corresponds to non-overlapping convex hulls of the confounders across treatment levels. Thus, one could envision performing a multi-class SVM in order to derive the margins. The approach in the multicategorical case is to take the union of the margins from all pairwise classifiers as the meta-margin and to perform appropriate causal comparisons based on pairwise treatment comparisons. This is described in the cholangiocarcinoma example in §4.1.
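The meta-margin construction can be sketched as follows: fit a linear SVM for each pair of treatment levels and take the union of the pairwise margins. The data-generating choices and the cost parameter below are illustrative.

```python
# Hedged sketch of the meta-margin for a multicategorical treatment.
import itertools
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n, K = 300, 3
Z = rng.normal(size=(n, 2))
T = rng.integers(1, K + 1, size=n)   # treatment levels 1..K

meta_margin = np.zeros(n, dtype=bool)
for a, b in itertools.combinations(range(1, K + 1), 2):
    pair = (T == a) | (T == b)
    # Pairwise linear SVM on the two arms being compared.
    svm = SVC(kernel="linear", C=1.0).fit(Z[pair], T[pair])
    inside = np.abs(svm.decision_function(Z[pair])) < 1.0
    # Union of the pairwise margins forms the meta-margin.
    meta_margin[np.flatnonzero(pair)[inside]] = True

print(meta_margin.sum())
```

Pairwise causal comparisons are then restricted to the subjects flagged in the meta-margin.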

3.7 Extension to continuous treatments

As described in [33] and [34], there can be situations in which the causal estimand of interest is defined based on a treatment variable that is continuous. For these situations, development of the margin is not intuitive at first glance. I will use the arguments in [35] in order to extend the margin-based causal approach to accommodate continuous treatment variables.

In the previous sections, I have fit support vector machines for the treatment. In the case where the treatment is continuous, a natural analog is support vector regression. Statistically, this can be expressed as the following optimization problem: for a fixed $\epsilon > 0$, minimize over $f$

$$\sum_{i=1}^{n} \max\left( |T_i - f(Z_i)| - \epsilon, \; 0 \right) + \lambda \|f\|_{\mathcal{H}_K}^2,$$

where $\lambda$ is a smoothing parameter and $\|f\|_{\mathcal{H}_K}$ denotes the norm of a function in a reproducing kernel Hilbert space (RKHS; [36]). As in §3.3, I will choose the RKHS that corresponds to the linear kernel, so that $f(z) = \theta^{\top} z + b$. I will also define a hard tube as a hyperplane such that, for $i = 1, \ldots, n$,

$$|T_i - \theta^{\top} Z_i - b| \leq \epsilon.$$

It is easy to see that by choosing ε to be sufficiently large, a hard tube will always exist. Using Gale's theorem [37], a hard tube exists if and only if an associated system of linear inequalities has no solution.

I now define the lifted sets D⁺ = {(X_i, T_i + ε) : i = 1, …, n} and D⁻ = {(X_i, T_i − ε) : i = 1, …, n}. Using the arguments in [35], it can be seen that the existence of a hard ε-tube is equivalent to the convex hulls of D⁺ and D⁻ being separable. Thus, I have recast the problem of the margin for support vector regression into the setup presented in §3.1.

Unlike in the binary and categorical cases, one cannot use full matching to perform causal effect estimation. Instead, I adopt the approach used in [34], which uses generalized boosting [38] in conjunction with a normal model for the treatment residuals and weighted estimation in order to estimate causal parameters. Further details about the implementation can be found in the example in §4.2.

Remark 3: Recently, [39] showed that violations of the treatment positivity assumption in marginal structural models can manifest as observations with very large weights. In an analysis of a database designed to study the effect of warfarin on the risk of bleeding, they found that uncritical use of marginal structural models yielded an odds ratio of 17.2, while using restricted weights in the marginal structural models yielded an odds ratio of 2.0. The latter was much more in line with results in the field.

4 Numerical Examples

4.1 Cholangiocarcinoma Data

The first example comes from a dataset of 3894 patients with intrahepatic cholangiocarcinomas (IHC) that was previously studied in [40]. In that study, the effect of radiation and surgery on patient survival in this population was explored using data extracted from the Surveillance, Epidemiology and End Results (SEER) registry. Note that there are four levels of treatment: no treatment, radiation only, surgery only and combined (radiation and surgery). While Figure 3 shows some overlap in the plots of overall survival, a log-rank test reveals a highly significant difference between the four groups.

Figure 3:

Kaplan-Meier curves of overall survival by the treatment group for the IHC study. The treatment group corresponds to no treatment (black line), radiation only (blue line), surgery only (purple line) and combined treatment (gray line). The log-rank statistic for comparing the four groups is 258 and is distributed as chi-squared with three degrees of freedom under the null hypothesis of no difference in survival between the groups.

Because patients in the SEER registry are not randomized to treatment, there might be self-selection in patients’ choice of treatments, leading to confounding. I illustrate the methods in the paper by first comparing the combined treatment group to the rest. A proportional hazards model of overall survival on this binary treatment yields an estimated hazard ratio of 0.60 with an associated 95% confidence interval of (0.52, 0.68). Thus, use of the combined treatment is associated with a 40% reduction in relative risk of death. I use the following variables as confounders in the analysis: age, stage of cancer, race and SEER registry location. If I adjust for these variables in the proportional hazards model, then the estimated hazard ratio of treatment changes to 0.62 with an associated 95% confidence interval of (0.55, 0.72).

The analyses in the previous paragraph used all 3894 observations. I now apply the margin methodology. To reiterate, this corresponds to the following three steps: (a) fit a support vector machine with combined treatment versus the rest as the outcome; (b) identify the margin observations and perform full matching; (c) fit a proportional hazards regression model of survival on treatment in which the matched observations are treated as fixed strata, i.e., each matched set has a separate baseline hazard function in the Cox model, but the covariate effects are the same across matched sets. I used the svm function available in the e1071 library for step (a), with default parameter settings. This analysis yielded an estimated hazard ratio of 0.42 with an associated 95% CI of (0.32, 0.54). Thus, this approach leads to a larger effect size. What is key to note, however, is that we are no longer using all 3894 observations: this analysis uses 604 observations, so in effect nearly 85% of the observations have been discarded. While much data have been removed, the tradeoff is that the remaining observations better satisfy the treatment positivity assumption. A simulation exercise explored the bootstrap distribution of the number of observations in the margin; the results are shown in Figure 4. What is seen here is that the modelling of the true effect relies on only 10–20% of the data. This underscores the fact that while more observations might be desirable from a statistical point of view, for causal effect estimation problems a potentially key concept is the margin size.

Figure 4: Bootstrap distribution for the margin size based on the IHC data. The purple line denotes the margin size for the observed data (604 observations); the distribution is roughly centered around the observed value.

Next, I consider treatment as a four-level variable based on the four groups in Figure 3. Fitting a multicategorical SVM using all six pairwise classifiers, I find that the meta-margin only excludes observations from the no treatment group. Out of the 2333 IHC subjects who received no treatment in the original dataset, 1188 are removed. I then ran analyses for all six possible pairwise treatment comparisons using a regression adjustment strategy versus the proposed methodology; they are summarized in Table 1. Several findings emerge. First, there is no reduction in the number of observations for comparisons that do not involve the no treatment group, and for those comparisons the standard and proposed approaches give fairly concordant results. More pronounced differences are found in comparisons involving the no treatment group. In general, the proposed method leads to stronger effect estimates, although this comes at the cost of slightly increased variability. Combining radiation and surgery tends to lead to decreased risk of death relative to using either treatment by itself.

                        Standard                  Proposed
Comparison              HR (95% CI)        n      HR (95% CI)        n
Radiation/No Treatment  0.63 (0.55, 0.71)  2676   0.49 (0.41, 0.59)  1488
Surgery/No Treatment    0.66 (0.60, 0.72)  3289   0.64 (0.56, 0.73)  2101
Combined/No Treatment   0.51 (0.44, 0.59)  2595   0.37 (0.30, 0.46)  1407
Surgery/Radiation       1.00 (0.87, 1.15)  1299   1.06 (0.88, 1.26)  1299
Combined/Radiation      0.76 (0.65, 0.87)  1218   0.60 (0.49, 0.74)  1218
Combined/Surgery        0.68 (0.57, 0.83)   605   0.70 (0.55, 0.90)   605
Table 1:

Hazard ratios for pairwise treatment comparisons in IHC data using a regression-based PH model (Standard) as well as the margin-based approach (Proposed). HR represents hazard ratio for death, while 95% CI denotes the associated 95% confidence interval. For the comparison column, A/B represents a comparison between groups A and B with B denoting the reference group. For example, the ‘Radiation/No Treatment’ entry denotes comparing the radiation-treated group to the no treatment group, with the latter serving as the reference group.

As a final exploratory analysis, I created an indicator variable for inclusion in the margin and fit a classification tree model to determine which factors predict margin membership. The first split is on age: if age is less than 63.5 years, there is a 93% chance of being in the margin. This suggests that margin-based inferences are being made with respect to a younger population than what is represented in the entire IHC dataset.
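
This exploratory step can be sketched with a shallow classification tree. The data below are simulated so that younger subjects fall in the margin, loosely mimicking the age < 63.5 split reported above; the covariates and the margin mechanism are illustrative assumptions.

```python
# Sketch: which covariates drive margin membership? Regress a
# margin-membership indicator on covariates with a shallow tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 500
age = rng.uniform(30, 90, size=n)
stage = rng.integers(1, 5, size=n).astype(float)
# toy mechanism: younger subjects tend to lie in the margin
in_margin = (age + rng.normal(scale=5, size=n) < 63.5).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(np.column_stack([age, stage]), in_margin)
print(export_text(tree, feature_names=["age", "stage"]))
```

Printing the tree shows the first split landing on age near the true cutoff, which is the kind of diagnostic used for the IHC margin.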

4.2 Early Dieting in Girls Study

To illustrate the margin-based approach with a continuous treatment, I use data from the Early Dieting in Girls study, a longitudinal study in which mother-daughter dyads were followed at five time points. The study population comes from white non-Hispanic families living in central Pennsylvania. At each time point, measurements were taken, and the mothers and daughters were interviewed. Broadly speaking, the goal of the study is to examine parental influences on daughters’ growth and development from ages 5 to 15; further details can be found in [41], [34] and [42].

This analysis models the influence of mothers’ weight concern in year 2 of the study on their daughters’ body mass index at year 3 of the study. The treatment variable is the mother’s overall weight concern, measured when the daughter is age 7. It is the average score of five questions in the questionnaire; a higher value implies the mother is more concerned about gaining weight. In the dataset, its values range from 0 to 3.4. There were 21 potential baseline confounders considered in this study regarding participants’ characteristics, such as family history of diabetes and obesity, family income, daughter’s disinhibition, daughter’s body esteem, mother’s perception of her own current size and mother’s satisfaction with her daughter’s current body. The margin-based approach will involve support vector regression. I follow the same three-step procedure as described above. One issue that arises is that for the third step, one cannot use full matching as in the previous example. Instead, I adopt the approach of Zhu et al. [34] for performing causal inference with a continuous treatment:

  1. Fit T on X using generalized boosting [38] and obtain the fitted mean m̂(X_i) for i = 1, …, n;

  2. Calculate the residuals ê_i = T_i − m̂(X_i); their density f can be approximated by a normal density;

  3. Compute stabilized weights sw_i = f̂₀(ê₀ᵢ) / f̂(ê_i) for i = 1, …, n, where ê₀ᵢ denotes the estimated residual for observation i using the null model (i.e., not involving any covariates X);

  4. Run a weighted regression of Y on T using the stabilized weights.
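
The four steps above can be sketched as follows. This is a minimal illustration on simulated data with an assumed true effect of 0.3; the treatment and outcome models, and the use of scikit-learn's gradient boosting in place of the gbm implementation in [38], are illustrative assumptions.

```python
# Sketch of steps 1-4: stabilized weights for a continuous treatment,
# with boosting for E[T | X] and a normal residual-density approximation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n, p = 400, 5
X = rng.normal(size=(n, p))
T = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)  # confounded treatment
Y = 0.3 * T + X[:, 0] + rng.normal(size=n)        # outcome

def normal_pdf(x, scale):
    return np.exp(-0.5 * (x / scale) ** 2) / (scale * np.sqrt(2 * np.pi))

# Steps 1-2: fit T on X by boosting and compute residuals
gbm = GradientBoostingRegressor(random_state=0).fit(X, T)
resid = T - gbm.predict(X)
f_cond = normal_pdf(resid, resid.std())

# Step 3: residuals under the null model (no covariates)
resid0 = T - T.mean()
sw = normal_pdf(resid0, resid0.std()) / f_cond    # stabilized weights

# Step 4: weighted regression of the outcome on the treatment
fit = LinearRegression().fit(T.reshape(-1, 1), Y, sample_weight=sw)
print("weighted effect estimate:", round(fit.coef_[0], 3))
```

The weighted estimate should fall between the confounded ordinary regression coefficient and the true effect, illustrating how the weights discount observations with improbable treatment values.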

Applying this approach to the full dataset yields a causal effect estimate whose test statistic is marginally significant at the 0.05 level. Using the margin-based approach discards 21 observations, but the effect estimate changes to 0.41 with an associated standard error of 0.76, which is a non-significant effect. Thus, there are observations that violate the relaxed covariate overlap condition and hence might also violate the treatment positivity condition.

5 Discussion

In this article, I have shown how the margin concept from machine learning provides a basis for estimating causal effects in a manner not requiring model extrapolation and leads to a natural three-step approach for causal inference. The margin identifies regions in the covariate space where there is overlap in the confounders between treatment groups. Areas where there is no covariate overlap violate key assumptions in causal inference, such as the treatment positivity assumption.

While the margin from support vector machines has been espoused in [11], there are several important differences between that work and what is proposed here. The use of SVMs arises naturally from consideration of the duality between covariate overlap and separating hyperplanes. Further, I only consider linear separating hyperplanes or, equivalently, SVMs with a reproducing kernel Hilbert space corresponding to a linear kernel. By contrast, Ratkovic [11] proposes a hierarchical SVM model and attendant Bayesian inferential procedures for linear and nonlinear SVMs. There is no intuitive geometric covariate overlap notion for what Ratkovic [11] proposes in the nonlinear case, and he uses first-order conditions to argue for covariate balance in the margin. However, that work as well as the current paper argues for better understanding of the statistical properties of the margin; this is currently under investigation.

Implicit in the causal effect analyses is that inferences are done conditionally on finding the margin. As argued in other contexts (e.g., [43]), there exist many modes of performing inference in causal analyses. This issue should be explored further as well. In particular, the recent literature on post-model selection inference in [44] could potentially be extended to this setting.

Acknowledgements

This research is supported by a pilot grant from the Data Science to Patient Value (D2V) initiative from the University of Colorado. The author would like to thank Dr. Yeying Zhu and Dr. Nandita Mitra for providing the dieting and cholangiocarcinoma datasets, respectively. The author would like to acknowledge an associate editor and referee, whose comments greatly improved the quality of the manuscript.


  • [1] Imbens GW, Rubin DB. Causal Inference for Statistics, Social and Biomedical Sciences: an Introduction. Cambridge: Cambridge University Press; 2015.
  • [2] Hansen B. Full matching in an observational study of coaching for the SAT. J Am Statist Assoc 2004; 99: 609 – 618.
  • [3] Stuart EA. Matching methods for causal inference: a review and a look forward. Statist Sci 2010; 25:1 – 21.
  • [4] Iacus SM, King G, Porro G. Multivariate matching methods that are monotonic imbalance bounding. J Am Statist Assoc 2011; 106: 345 – 361.
  • [5] Imai K, Ratkovic M. Covariate balancing propensity score. J Roy Statis Soc Ser B 2014; 76: 243 – 266.
  • [6] Chan KCG, Yam SCP, Zhang Z. Globally efficient nonparametric inference of average treatment effects by empirical balancing calibration weighting. J Roy Statis Soc Ser B 2015;
  • [7] Zubizarreta JR. Stable weights that balance covariates for estimation with incomplete outcome data. J Am Statist Assoc 2015; 110: 910 – 922.
  • [8] Crump RK, Hotz VJ, Imbens GW, Mitnik OA. Dealing with limited overlap in estimation of average treatment effects. Biometrika 2009; 96: 187 – 199.
  • [9] Traskin M, Small D. Defining the study population for an observational study to ensure sufficient overlap: a tree approach. Stat Biosci 2011; 3: 94-118.
  • [10] Cristianini N., Shawe-Taylor J. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge: Cambridge University Press; 2000.
  • [11] Ratkovic M. Balancing within the margin: causal effect estimation with support vector machines. Technical Report, Department of Politics, Princeton University, 2014.
  • [12] Ghosh D, Zhu Y, Coffman DS. Penalized regression procedures for variable selection in the potential outcomes framework. Stat Med 2015; 34: 1645 – 58.
  • [13] Neyman J. Sur les applications de la theorie des probabilites aux experiences agricoles: Essai des principes. Statist Sci 1990; 463-472.
  • [14] Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. J Educ Psych 1974; 66: 688 – 701.
  • [15] Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika 1983; 70: 41 – 55.
  • [16] Robins JM. Marginal structural models. In 1997 Proceedings of the American Statistical Association, Section on Bayesian Statistical Science, 1 – 10.
  • [17] Ho DE, Imai K, King G, Stuart EA. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Pol Anal 2007; 15: 199–236.
  • [18] Diamond A, Sekhon JS. Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Rev Econ Stat 2013; 95: 932–945.
  • [19] Robins JM, Ritov Y. Toward a curse of dimensionality appropriate (CODA) asymptotic theory for semi-parametric models. Stat Med 1997; 16: 285 – 319.
  • [20] D’Amour A, Ding P, Feller A, Lei L, Sekhon J. Overlap in observational studies with high-dimensional covariates. Preprint; 2017.
  • [21] Gruber S, van der Laan MJ. An application of collaborative targeted maximum likelihood estimation in causal inference and genomics. Int J Biostat 2010; 6: Article 18.
  • [22] Luo W, Zhu Y, Ghosh D. On estimating regression causal effects using sufficient dimension reduction. Biometrika 2017; 104: 51 – 65.
  • [23] Petersen ML, Porter KE, Gruber S, Wang Y, van der Laan MJ. Diagnosing and responding to violations in the positivity assumption. Stat Methods Med Res 2012; 21: 31 – 54.
  • [24] Schneider R. Convex bodies: The Brunn-Minkowski theory: Encyclopedia of Mathematics and its Applications. Cambridge: Cambridge University Press; 1993.
  • [25] Grünbaum B. Convex Polytopes. New York: Springer; 2003.
  • [26] Chazelle B. An optimal convex hull algorithm in any fixed dimension. Discrete Comput Geom 1993; 10: 377 – 409.
  • [27] Vapnik VN. The Nature of Statistical Learning Theory. New York: Springer-Verlag; 1995.
  • [28] Rosenbaum P. Design of Observational Studies. New York: Springer; 2010.
  • [29] Rubin DB. The design versus the analysis of observational studies for causal effects: parallels with the design of randomized trials. Stat Med 2007; 26: 20 – 36.
  • [30] Lunceford JK, Davidian M. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Stat Med 2004; 23: 2937 – 2960.
  • [31] van der Laan MJ, Rose S. Targeted Learning: Causal Inference for Observational and Experimental Data. New York: Springer; 2011.
  • [32] Imbens GW. The role of the propensity score in estimating dose-response functions. Biometrika 2000; 87: 706 – 710.
  • [33] Imai K, Van Dyk DA. Causal inference with general treatment regimes. J Am Statist Assoc 2004; 99: 854 – 866.
  • [34] Zhu Y, Coffman DS, Ghosh D. A boosting algorithm for estimating generalized propensity scores with continuous treatments. J Causal Inf 2015; 3: 25 – 40.
  • [35] Bi J, Bennett K. A geometric approach to support vector regression. Neurocomputing 2003; 55: 79 – 108.
  • [36] Wahba G. Spline Models for Observational Data. Philadelphia: SIAM; 1990.
  • [37] Mangasarian OL. Nonlinear Programming. New York: McGraw-Hill; 1994.
  • [38] Ridgeway G. The state of boosting. Comp Sci Stat 1999; 31: 172 – 181.
  • [39] Platt RW, Delaney JA, Suissa S. The positivity assumption and marginal structural models: the example of warfarin use and risk of bleeding. Eur J Epidemiol 2012; 27: 77 – 83.
  • [40] Shinohara ET, Mitra N, Guo M, Metz JM. Radiation Treatment is associated with improved survival in the adjuvant and definitive treatment of intrahepatic cholangiocarcinoma. Int J Radiat Oncol Biol Phys 2008;72:1495-501.
  • [41] Fisher JO, Birch LL. Eating in the absence of hunger and overweight in girls from 5 to 7 y of age. Am J Clin Nutr 2002;76:226 – 31.
  • [42] Zhu Y, Ghosh D, Coffman DL, Savage JS. Estimating controlled direct effects of restrictive feeding practices in the ’Early dieting in girls’ study. J R Stat Soc Ser C Appl Stat 2016; 65: 115 – 130.
  • [43] Rubin DB. Practical implications of modes of statistical inference for causal effects and the critical role of the assignment mechanism. Biometrics 1991; 47: 1213 – 1234.
  • [44] Lee JD, Sun DL, Sun Y, Taylor JE. Exact post-selection inference, with application to the lasso. Ann Statist 2016; 44: 907–927.