As massive and varied amounts of observational data accumulate in healthcare, internet marketing, and governance, these data are increasingly used to understand important cause-and-effect relationships. We might want to know whether a policy causes people to use fewer public services, whether a particular drug causes a side effect, or whether viewing an internet advertisement increases the chance of a purchase. Controlled trials with randomized treatment and control populations are often small and expensive, and frequently impossible, whether because of the ethics of administering the treatment or because the treatment occurred only in the past. Observational inference addresses this issue by supposing that treatment is administered based exclusively on a set of attributes related to both outcomes and treatments, so that when the values of these attributes are held fixed, treatment is assigned uniformly at random.
Practically, this could be a major flaw in the current paradigm. Choosing a single fixed match ignores a major source of uncertainty: the design itself, or in other words, the uncertainty related to the choice of experimenter. What if there were two equally good matches, one in which the treatment effect estimate is very strong and one in which it is nonexistent? When we report a result on a particular matching assignment, we thus ignore the possibility of the opposite result occurring on an equally good assignment. It is entirely possible that two separate researchers studying the same effect on the same data, using two different but equally good sets of pairs, would reach results that disagree. The use of matching for covariate adjustment is often advocated in place of parametric models (Ho et al. 2007) because estimates with the former do not depend on arbitrary assumptions about the parametric form of the outcome distribution. Unfortunately, variation in estimates due to the arbitrary choice of matching method injects the same type of uncertainty into the results that matching was supposed to remove in the first place. The problem of model choice becomes the problem of match choice.
Our goal is to create robust matched pairs hypothesis tests for observational data. These tests implicitly consider all possible reasonably good assignments and the range of possible outcomes for tests on these data. This is a more computationally demanding approach to hypothesis testing than the standard approach, in which one considers just a single assignment, but the result is more robust to the choice of experimenter. It is computationally infeasible (and perhaps not very enlightening) to explicitly compute all possible assignments, but it is possible to look at the range of outcomes associated with them. In particular, our algorithms compute the maximum and minimum of quantities like the test statistic values and their associated p-values.
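The range-of-outcomes idea can be made concrete with a brute-force sketch on a toy dataset (all names and numbers below are hypothetical): enumerate every one-to-one assignment of treated units to distinct controls that satisfies a caliper, and record the extremes of a paired difference-in-means statistic over those "good" assignments. Real instances are far too large for enumeration, which is why optimization algorithms are developed instead.

```python
import itertools

# Toy brute-force version of the robust procedure (all numbers hypothetical):
# enumerate every one-to-one assignment of treated units to distinct controls
# that satisfies a caliper on the covariate, and record the extremes of a
# paired difference-in-means statistic over those "good" assignments.
treated = [(0.0, 1.2), (1.0, 0.8), (2.0, 2.5)]              # (covariate, outcome)
controls = [(0.1, 0.3), (0.9, 1.1), (2.1, 0.7), (1.1, 0.9)]
CALIPER = 0.5

def paired_mean_diff(assignment):
    """Average treated-minus-control outcome over the matched pairs."""
    return sum(treated[i][1] - controls[j][1]
               for i, j in enumerate(assignment)) / len(assignment)

good_stats = []
for assignment in itertools.permutations(range(len(controls)), len(treated)):
    if all(abs(treated[i][0] - controls[j][0]) <= CALIPER
           for i, j in enumerate(assignment)):
        good_stats.append(paired_mean_diff(assignment))

t_min, t_max = min(good_stats), max(good_stats)   # the robust range
```

On this toy instance only two assignments satisfy the caliper, and they already yield two different statistic values; the robust range reports both ends rather than an arbitrary one.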
Finding a set of matched pairs that obeys certain conditions is purely a data mining problem. Similar subfields of data mining, where the goal is to locate optimal subsets of data, include association rule mining and event detection. For these types of problems, modern discrete optimization techniques have rarely been used, though there is one recent precedent in the literature for matched pairs, namely the line of work by Zubizarreta (2012) and Zubizarreta et al. (2013, 2014).
Optimization techniques have major advantages over other types of approaches: (i) they are extremely flexible and allow the experimenter to match on very complex conditions, such as quantiles of attributes, which network optimization methods for matching cannot handle; (ii) they can be computationally efficient, depending on the strength of the integer programming formulation (strong formulations have relaxations close to the set of feasible integer solutions); and (iii) mixed-integer linear programs have guarantees on the optimality of the solution; in fact, they produce upper and lower bounds on the value of the optimal solution.
After formalizing our framework in Section 2, we offer a robust formulation of McNemar's statistic for binary outcomes in Section 3. We introduce a general Integer Linear Program for general matching problems, and a non-ILP polynomial-time algorithm for matching problems with a particular type of constraint. We then give methods to compute p-values for this test under several null hypotheses as well as different conceptions of matching. We provide exact formulas for randomization and conditional distributions of the statistic.
In Section 4 we outline an ILP to compute the robust version of the canonical z-test in a general case. We offer a linearized formulation of the program that makes it solvable with any popular ILP solver. Finally, we present two case studies that apply our robust statistics to real-world datasets in Section 5.
2 Matching for Robust Tests
Throughout this paper we adopt the potential outcomes framework (see Holland 1986, Rubin 1974). For each unit i, we have potential outcomes y_i(t), where t ∈ {0, 1}. As is standard in causal inference settings, there are n_t units that receive the treatment and n_c units that receive the control condition, with n = n_t + n_c. We denote the condition administered to each unit by t_i ∈ {0, 1}. We never observe realizations of y_i(1) and y_i(0) for the same unit at the same time, but only of y_i = t_i y_i(1) + (1 − t_i) y_i(0). To conduct our hypothesis test we have n observations: (y_i, t_i, x_i) for i = 1, …, n, with x_i being a set of covariates that take values in some discrete set X. We make the classical assumptions of Conditional Ignorability of treatment assignment and Stable Unit Treatment Value (SUTVA): (Conditional Ignorability) For any two units i, j with x_i = x_j, we assume that Pr(t_i = 1 | x_i) = Pr(t_j = 1 | x_j) = e(x_i), where e is some function mapping from the covariate space X to (0, 1). (SUTVA) For all units i: y_i = t_i y_i(1) + (1 − t_i) y_i(0). A matching operator determines which control is assigned to which treatment. In this paper we focus on one-to-one matching, therefore we define the matching operator as follows. (Matching Operator) A matching operator μ obeys the following: if i ≠ j and μ(i) ≠ ∅, then μ(i) ≠ μ(j). That is, no two treatment units i and j are assigned to the same control unit. We define the size of the matching, i.e., the number of matched pairs, to be κ = Σ_i 1[μ(i) ≠ ∅], with 1[·] representing the indicator function for the event in brackets throughout the paper. The set of all matching operators is the set of all possible assignments, denoted M. Throughout the rest of the paper we slightly abuse notation by also using μ to represent a matrix of matches such that μ_{ij} = 1 if i and j are matched together.
Throughout the paper we will be interested in testing the following null hypotheses, starting from hypotheses on the population of experimental units, with outcomes, treatment indicators, and covariates respectively denoted by Y, T, and X: the hypothesis of zero average treatment effect, E[Y(1) − Y(0)] = 0, and of zero average treatment effect on the treated, E[Y(1) − Y(0) | T = 1] = 0.
The first two are null hypotheses of zero average treatment effects, one on the whole sample and the other only on the treated units. All these tests presuppose potential outcomes to be random variables drawn from some distribution. We also consider randomization inference for finite samples; in this case potential outcomes are assumed to be nonrandom and fixed at the same value under any treatment assignment: y_i(1) = y_i(0) for all i = 1, …, n.
This is Fisher’s classic sharp null hypothesis of no treatment effect, and while it is a more stringent null hypothesis than the hypothesis of no average treatment effect, it is widely used in experimental and non-experimental research today. This hypothesis, unlike the prior ones, presupposes potential outcomes to be fixed and nonrandom.
For now, consider a generic test statistic T(μ, D), where D = (y, t, x) denotes the observed data. This particular notation is used to emphasize the dependence of the test statistic on the matches μ, by denoting that T is always evaluated on the same dataset D, but with possibly varying matches.
2.1 Matching Analyses and Their Sensitivity to Arbitrary Choices
Existing studies that use matching for hypothesis testing all roughly follow the same template:
1. Choose a test statistic.
2. Define criteria that acceptable matches should satisfy (e.g., covariate balance).
3. Apply one or several matching algorithms to the data, generating several sets of matches that satisfy the previously defined criteria.
4. Arbitrarily choose one among all the matches that satisfy the requirements (e.g., the one with the best balance).
5. Compute the test statistic and its p-value on the matched data.
This procedure is simple and computationally attractive; however, it does not explicitly incorporate the uncertainty arising from step 4: values of the test statistic computed under different but equally good matches can differ. Since these matches all satisfy the quality requirements imposed by the analyst, they are indistinguishable from one another, and so are the test statistic values resulting from them.
Each of the algorithms that could be used for matching may itself choose arbitrarily among the different, equally good solutions that it finds, or base its choice on arbitrary decisions of the analyst that are commonly thought not to influence results.
Table 1: Matching algorithms and their sensitivity to arbitrary choices.

| Algorithm | Advantage | Sensitivity to arbitrary choices |
| Greedy NN-Matching | Fast | Yes (e.g., affected by arbitrary transformations of the data such as permutations) |
| Optimal Matching (Rosenbaum 1989) | Optimal | Yes, but moderate (e.g., arbitrary choice among many possible nearly optimal solutions) |
| Optimal With Selection (Rosenbaum 2012) | Optimal for both groups | Yes (same problem as Optimal Matching, with a larger decision space) |
All of the algorithms in Table 1 suffer from this issue: sensitivity to seemingly arbitrary choices by the experimenter that go unaccounted for in the regular test statistic and p-value calculations. Consider the popular greedy Nearest-Neighbor matching procedure: the order in which observations are matched matters for the resulting assignment as observations that are matched earlier will have a greater pool of potential matches to choose from. Optimal assignment algorithms are subject to arbitrary implementation choices and could have to choose among multiple optima and near-optima, and most often will do so arbitrarily.
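A two-unit sketch makes the order dependence of greedy nearest-neighbor matching concrete (the data are hypothetical): each treated unit grabs the closest unused control, so the same data yield different pairings depending solely on the order in which treated units are processed.

```python
# Minimal sketch (hypothetical data) of why greedy nearest-neighbor matching is
# order-dependent: each treated unit grabs the closest unused control, so the
# processing order of the treated units changes the final assignment.
def greedy_nn(treated_x, control_x, order):
    used, pairs = set(), {}
    for i in order:
        best = min((j for j in range(len(control_x)) if j not in used),
                   key=lambda j: abs(treated_x[i] - control_x[j]))
        used.add(best)
        pairs[i] = best
    return pairs

treated_x = [1.0, 1.1]
control_x = [1.05, 3.0]

pairs_a = greedy_nn(treated_x, control_x, order=[0, 1])  # unit 0 processed first
pairs_b = greedy_nn(treated_x, control_x, order=[1, 0])  # unit 1 processed first
# The two orders produce different matched pairs on identical data.
```

Whichever treated unit is processed first claims the nearby control, forcing the other treated unit onto the distant one; nothing in the data distinguishes the two orders.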
A natural question to ask at this point is whether the issue raised above affects existing results of matching studies or is actually mitigated in practice. Some evidence is provided by Morucci and Rudin (2018), who replicate several social science studies that use matching, perform the same hypothesis tests with several popular matching algorithms, and show that hypothesis test results are influenced by the choice of matching method.
2.2 Proposed Approach
We propose to use the range of test statistics produced by good matches as a test statistic itself. First we define a set of good assignments M_good ⊆ M, where all μ ∈ M_good obey user-specified quality constraints in addition to those defining M. M_good intuitively represents the set of assignments arising from all reasonable matching methods; we refer to a match in M_good as a good assignment.
Then, the procedure is as follows:
Robust Procedure: Over all assignments in M_good, choose the two that respectively minimize and maximize the value of the test statistic of interest, and compute the maximal and minimal values of the test statistic: T_max = max_{μ ∈ M_good} T(μ, D) and T_min = min_{μ ∈ M_good} T(μ, D).
The range between T_min and T_max includes, by definition, all the possible values of T that could be obtained on the data D by changing the matching assignment. In this sense, the pair (T_min, T_max) is robust to the analyst's choice of matching method, and to arbitrary choices made by the matching algorithm itself. If both test statistics successfully reject the null hypothesis of interest, then it can be concluded that the null can be rejected robustly and independently of the chosen matching method.
Denoting the test statistic as a random variable T, and its computed realization in the data by t, several kinds of robust p-values can be computed from the joint and marginal distributions of (T_min, T_max) under any of the hypotheses of interest.
This robust approach is justified as follows. First, we do not want to consider how likely a certain assignment is to appear. This would involve modeling the human behavior of the experimenter, or the arbitrary and complex choices of different matching algorithms. We do not want to place a distribution over the choice of algorithms or matches that an experimenter would choose. As Morgan and Winship (2007) note, there is no clear guidance on the choice of matching procedure, and we do not presuppose a distribution over these procedures. In reality, most researchers tend to use popular and widely-cited matching software packages, and they make this choice independently of the specific problem being studied. Second, we do not want to consider statistics of the set of good assignments, such as the average value of T over M_good. This is simply a special case of the previous point, where we assume that all good assignments are equally likely to be chosen, which is clearly not the case. Third, there is no sound statistical theory underpinning the idea that results averaged over a set of good matches should be more accurate than results from one single match. Fourth, the idea that our tests might lead to results that are too extreme is fundamentally inaccurate, as there is no reason why these results should be considered outlying or extreme, even if most of the matches in M_good lead to similar results that differ from those at the extrema. Note also that considering a result output by one of the extrema as an “outlier,” in the sense that most other results with good matches are believed to be concentrated elsewhere in the space of M_good, is circular logic: if an assignment is included in M_good, then it must be thought to be good enough according to the analyst's criteria. Excluding this assignment because it produces values of T that are too extreme is tantamount to excluding observations from the dataset because they do not produce the desired result.
2.3 Computing Null Distributions for Robust Test Statistics
Here we discuss two different conceptions of the role of matching in observational causal inference that give rise to two different types of distributions for our test statistics.
Matching as Preprocessing (MaP): This is the approach of Ho et al. (2007) and Rosenbaum (2010), who assume that matching is a deterministic preprocessing step. Our goal is to approximate a draw from a target distribution, but we cannot estimate this distribution directly as we never observe y_i(1) and y_i(0) jointly. Matching, together with assumptions on the form of the target distribution, allows us to solve this problem by treating a sample of matched pairs as an approximate draw from it. In this approach the matched data are then analyzed as if they were drawn directly from the target distribution. Test statistics obtained with any match in M_good are treated as if computed on a sample from that distribution: at no point in this process do the unmatched data and the matching algorithm explicitly enter the uncertainty computation. Because of the nature of this approach, the null distribution of the test statistic is computed post-matching, and is always the same under any of the matches in M_good. In this sense, the range of p-values produced in this way is a Hacking Interval (Coker et al. 2018), a non-probabilistic interval of summary statistics (such as p-values) that could be derived from any manipulation of the data. Given that matching is a simple preprocessing step in this framework, the choice of matches in M_good is always deterministic. Once we have chosen the set of matches that we think best approximates a draw from the target distribution, any test statistic computed on it has a distribution independent of the matches.
Matching as Estimator (MaE): In this case matching is taken to be fully part of the data generating process that leads to the observed value of the test statistic. This approach is taken by Abadie and Imbens (2006). We still assume the matched data to represent a random draw from our target distribution of interest; however, we take into full account the initial distribution that the unmatched data were drawn from. While in the case of MaP we assumed that matched data come from a certain statistical experiment, in this case we assume that data are randomly drawn before matching. This implies that the distribution of the statistic is affected by the matching procedure, and as such, standard error and p-value calculations are affected by the matching procedure. For this problem we focus on testing two of the hypotheses above. For binary data, we will consider null distributions for the statistic under both hypotheses that take into consideration the additional uncertainty stemming from matches being made post-treatment, as well as strategies for computing one- and two-tailed p-values.
2.4 Constraining the Quality of the Matches
Determining the quality requirements that matches should satisfy in order to be usable is ultimately up to the experimenter; however, there are several general types of constraints that matches in M_good should most often obey. Let d be a metric on the covariate space X; some of these constraints are:
(Calipers) If μ(i) = j, then d(x_i, x_j) ≤ ε for some user-chosen caliper ε ≥ 0.
(Covariate balance: mean of chosen treatment units similar to mean of control group) For some user-chosen tolerance λ ≥ 0 we have: |(1/κ) Σ_{i: μ(i) ≠ ∅} x_i − (1/n_c) Σ_j x_j| ≤ λ, where the second sum runs over the control units.
(Maximizing the fitness of the matches) In general, one can optimize any measure of user-defined fitness for the assignment (including those defined above), and then constrain M_good to include all other feasible assignments at or near that fitness level, by including the following constraint: Fitness(μ) ≥ Maxfit − β, where Maxfit is precomputed as Maxfit = max_{μ ∈ M} Fitness(μ). If one desires the range of results for all maximally fit pairs and no other pairs, β can be set to 0.
In our examples we use calipers, mostly for ease of notation, but replacing caliper constraints with those above (or other constraints) is straightforward using MIP software.
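The near-maximal-fitness constraint can be sketched by enumeration on a toy instance (the data and the particular Fitness measure below are hypothetical): fitness is taken as the negative sum of within-pair covariate distances, Maxfit is precomputed, and M_good keeps every assignment whose fitness is within β of Maxfit.

```python
import itertools

# Hedged sketch of the near-maximal-fitness constraint (data and the Fitness
# measure are hypothetical): fitness is the negative sum of within-pair
# covariate distances, Maxfit is precomputed by enumeration, and M_good keeps
# every assignment whose fitness is within BETA of Maxfit.
treated_x = [0.0, 1.0]
control_x = [0.2, 0.9, 1.2]
BETA = 0.15

def fitness(assignment):
    return -sum(abs(treated_x[i] - control_x[j]) for i, j in enumerate(assignment))

all_assignments = list(itertools.permutations(range(len(control_x)), len(treated_x)))
maxfit = max(fitness(a) for a in all_assignments)
m_good = [a for a in all_assignments if fitness(a) >= maxfit - BETA]
# With BETA = 0, m_good would contain only the maximally fit assignment(s).
```

Here the slack β admits one additional, nearly optimal assignment alongside the best one; in a real analysis Maxfit would come from a solver rather than enumeration.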
A special type of constraints are Exclusively Binning Constraints: these are constraints that can be expressed as a division of the data into strata. These strata are sets of observations such that each unit belongs exclusively to one set, and if only units within the same stratum are matched together, then the resulting matches satisfy all the constraints. More formally, we define exclusively binning constraints as follows: (Exclusively Binning Constraints) The constraints on M_good are exclusively binning if there exists a partition X_1, …, X_S of the covariate space X such that μ satisfies the constraints if and only if, for every matched pair (i, μ(i)), both x_i ∈ X_s and x_{μ(i)} ∈ X_s for some stratum s.
This type of grouping is commonly referred to as blocking (Imai et al. 2008), and there already exist matching methods that openly adopt it as a strategy to construct good-quality matches (Iacus et al. 2012). In practice, several types of constraints on the quality of matches, chiefly balance as defined in Equation (5), can be implemented as a coarsening choice on the covariates (Iacus et al. 2011), and as such can be expressed as exclusively binning constraints.
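A coarsening-based stratification can be sketched as follows (the units, covariates, and coarsening rule are hypothetical): age is coarsened to decades and crossed with sex, and matches are then only allowed within a stratum, which automatically enforces the binning constraints.

```python
from collections import defaultdict

# A sketch of exclusively binning constraints via coarsening (units and
# covariates are hypothetical): age is coarsened to decades and crossed with
# sex; matches are then only allowed within a stratum, which automatically
# enforces the binning constraints.
units = [
    {"id": 0, "treated": 1, "age": 34, "sex": "F"},
    {"id": 1, "treated": 0, "age": 36, "sex": "F"},
    {"id": 2, "treated": 1, "age": 52, "sex": "M"},
    {"id": 3, "treated": 0, "age": 58, "sex": "M"},
    {"id": 4, "treated": 0, "age": 31, "sex": "F"},
]

def stratum(unit):
    return (unit["age"] // 10, unit["sex"])   # coarsen age into decades

strata = defaultdict(list)
for u in units:
    strata[stratum(u)].append(u["id"])
# Any matching that pairs units only within the lists in `strata` satisfies
# the exclusively binning constraints.
```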
In what follows, we provide special cases of the Robust Procedure for two specific hypothesis tests: McNemar's test for binary outcomes and the z-test for real-valued outcomes. We outline strategies for computing these statistics as well as their distributions under the hypotheses of interest. We begin with McNemar's test.
3 Robust McNemar’s Test
We begin with the problem of hypothesis testing for binary outcomes, that is, y_i ∈ {0, 1}. Let B(μ) be the count of matched pairs in which the treated unit has outcome 1 and the control unit has outcome 0 under assignment μ, and let C(μ) be the number of matched pairs in which the treated unit has outcome 0 and the control unit has outcome 1. We choose to use the following test statistic for both hypotheses: T(μ) = (B(μ) − C(μ)) / sqrt(B(μ) + C(μ) + 1).
We correct the denominator by adding one to the count of discordant pairs, as a matched set with B(μ) + C(μ) = 0 may exist in cases in which matches can be made in ways that allow it. This is the only difference between our formulation of the test statistic and its classical formulation (see Tamhane and Dunlop 2000). The robust test statistic can be defined as the pair: (T_min, T_max) = (min_{μ ∈ M_good} T(μ), max_{μ ∈ M_good} T(μ)).
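The corrected statistic is a one-liner; the sketch below implements it directly on a hypothetical list of matched pairs, with the +1 in the denominator keeping the statistic defined even when a matched set has no discordant pairs at all.

```python
import math

# Direct implementation of the corrected statistic from the text: B counts
# discordant (treated=1, control=0) pairs, C counts discordant (treated=0,
# control=1) pairs, and the +1 in the denominator keeps the statistic defined
# even when a matched set has no discordant pairs at all.
def mcnemar_stat(pairs):
    """pairs: list of (treated_outcome, control_outcome) tuples in {0, 1}."""
    b = sum(1 for yt, yc in pairs if yt == 1 and yc == 0)
    c = sum(1 for yt, yc in pairs if yt == 0 and yc == 1)
    return (b - c) / math.sqrt(b + c + 1)

stat = mcnemar_stat([(1, 0), (1, 0), (0, 1), (1, 1), (0, 0)])  # b = 2, c = 1
```

Note that a fully concordant matched set returns 0 rather than raising a division-by-zero error, which is precisely the point of the correction.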
In what follows, we outline two strategies to compute (T_min, T_max) subject to the constraints that define M_good: one when the constraints are very general, and one when the constraints are exclusively binning. The first relies on common mixed-integer programming techniques and is flexible enough to be implemented with any common MIP solver. The second exploits the form of the exclusively binning constraints to solve the problem with a fast polynomial-time algorithm. Under MaP both algorithms can be used to test all of the hypotheses above, while under MaE, we provide exact expressions for the distribution of the statistic under the sharp null in Section 3.3 and with a fixed number of treated units in Section 3.4.
3.1 Optimizing McNemar’s statistic with general constraints
In this section we give an Integer Linear Program (ILP) formulation that optimizes T(μ) with a pre-defined number of matches. This formulation can be adapted for testing either of the null hypotheses under MaP by changing the number of matches made, and can be used to test the sharp null with any number of matches.
One can show that the number of pairs in which the same outcome is realized for treatment and control is irrelevant, so we allow it to be chosen arbitrarily, with no constraints or variables defining it. The total number of pairs is also not relevant for this test. Therefore, we fix only the total number of untied responses (B- and C-pairs), κ. Note that fixing B(μ) + C(μ) = κ makes the denominator of the test statistic constant, so optimizing the statistic reduces to the linear objective of optimizing B(μ) − C(μ). The problem can be solved either for a specific κ or globally for all κ by looping over all possible values of κ until the problem becomes infeasible. In most practical data analysis scenarios the largest feasible κ is chosen.
Formulation 1: General ILP Formulation for McNemar's Test

max (or min)  B − C  subject to:
B = Σ_{i,j} μ_{ij} y_i (1 − y_j)   (total number of first type of discordant pairs)   (7)
C = Σ_{i,j} μ_{ij} (1 − y_i) y_j   (total number of second type of discordant pairs)   (8)
B + C = κ   (total number of discordant pairs)   (9)
Σ_j μ_{ij} ≤ 1 for each treated unit i   (10)
Σ_i μ_{ij} ≤ 1 for each control unit j   (11)
μ_{ij} ∈ {0, 1}
(Additional user-defined match quality constraints.)
Equations (7) and (8) define the variables B and C. To control the total number of untied responses, we incorporate Equation (9). Equations (10) and (11) ensure that each treated and each control unit appears in at most one pair. As is common practice in matching, the hypothesis of no average effect on the treated can be tested by adding a constraint that all of the treatment units are matched, while the sharp null can be tested with any reasonable number of matches.
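What the program computes can be illustrated by brute force on a toy instance (data below are hypothetical): over all one-to-one assignments with exactly κ discordant pairs, find the extremes of the corrected statistic. A real analysis would hand Formulation 1 to an ILP solver rather than enumerate.

```python
import itertools
import math

# Brute-force analogue (hypothetical toy data) of what the ILP computes: over all
# one-to-one assignments of treated units to distinct controls with exactly KAPPA
# discordant pairs, find the extremes of the corrected McNemar statistic.
# A real analysis would use an ILP solver rather than enumeration.
treated_y = [1, 0]          # outcomes of treated units
control_y = [0, 1, 0, 1]    # outcomes of control units
KAPPA = 1                   # fixed number of untied (discordant) pairs

def discordant_counts(assignment):
    """B counts (treated=1, control=0) pairs; C counts (treated=0, control=1)."""
    b = sum(1 for i, j in enumerate(assignment) if treated_y[i] == 1 and control_y[j] == 0)
    c = sum(1 for i, j in enumerate(assignment) if treated_y[i] == 0 and control_y[j] == 1)
    return b, c

stats = []
for a in itertools.permutations(range(len(control_y)), len(treated_y)):
    b, c = discordant_counts(a)
    if b + c == KAPPA:                       # fixed discordant count, as in (9)
        stats.append((b - c) / math.sqrt(KAPPA + 1))

t_min, t_max = min(stats), max(stats)
# On this instance the two extremes disagree in sign: equally sized matchings
# on the same data can point in opposite directions.
```

The sign disagreement between the extremes is exactly the phenomenon the robust test is designed to surface.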
3.2 Robust McNemar’s Test With Exclusively Binning Constraints.
If the constraints on M_good are exclusively binning, then T(μ) can be optimized quickly and easily in both directions once the partition into strata is constructed. Before showing this, we introduce an IP formulation for McNemar's test with exclusively binning constraints:
Formulation 2: IP formulation for McNemar's test with exclusively binning constraints

max (or min)  B − C  subject to:
B = Σ_s Σ_{i,j in stratum s} μ_{ij} y_i (1 − y_j)   (total number of first type of discordant pairs)   (14)
C = Σ_s Σ_{i,j in stratum s} μ_{ij} (1 − y_i) y_j   (total number of second type of discordant pairs)   (15)
one-to-one matching constraints, as in Formulation 1
matches allowed only within strata, with as many matches as possible made in each stratum
(Additional user-defined exclusively binning constraints.)
Note that analysts can remove the requirement that as many matches as possible be made, and instead introduce a fixed number of matches as a constraint, in a way similar to Constraint (9), if desired.
This problem can be solved in linear time, and without any of the canonical IP solution methods, by using the fact that the strata defined by the exclusively binning constraints can each be optimized separately once the direction of the resulting statistic is known. In stratum s there will be n_{t1}^s treated units with outcome 1, n_{c0}^s control units with outcome 0, n_{t0}^s treated units with outcome 0, and n_{c1}^s control units with outcome 1. This is summarized in Figure 1.
[Figure 1: counts of treated and control units with outcomes 1 and 0 within each stratum.]
To ensure that as many units as possible are matched, within each stratum we make exactly min(n_t^s, n_c^s) matches, where n_t^s and n_c^s denote the numbers of treated and control units in stratum s. We would like to make the matches within each stratum such that B − C is either maximized or minimized. Algorithm 1 maximizes the statistic,
and an exactly symmetric algorithm for minimizing it is given in Appendix 7. As is clear from their definitions, these algorithms do not require any of the conventional MIP solving techniques and as such are much faster: the running time of Algorithm 1 is clearly linear in n, the number of units. This constitutes a substantial speed-up over solving the problem with a regular MIP solver. The following theorem states the correctness of the algorithm for solving Formulation 2: (Correctness of Algorithm 1) Algorithm 1 globally solves Formulation 2 for the maximal value of the test statistic.
Proof. See Appendix 7. ∎
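A greedy sketch in the spirit of Algorithm 1 follows (the paper's exact pseudocode is in its appendix; the pair-count bookkeeping and variable names here are our own illustration). Within each stratum it makes min(#treated, #control) matches, preferring discordant pairs that increase B and concordant pairs, so that pairs increasing C are made only when forced; each stratum takes constant time, giving the linear overall running time claimed above.

```python
import math

# Hedged sketch of a per-stratum greedy in the spirit of Algorithm 1 (the paper's
# exact pseudocode is in its appendix; the counts t1, t0, c1, c0 and the pair
# variables p10, p11, p00, p01 are our own notation). Within each stratum we make
# min(#treated, #control) matches, preferring (treated=1, control=0) pairs, which
# increase B, and concordant pairs, so that (treated=0, control=1) pairs, which
# increase C, are made only when forced.
def max_b_minus_c(stratum_counts):
    """stratum_counts: iterable of (t1, t0, c1, c0) tuples, one per stratum,
    where t1/t0 count treated units with outcome 1/0 and c1/c0 count controls
    with outcome 1/0."""
    B = C = 0
    for t1, t0, c1, c0 in stratum_counts:
        m = min(t1 + t0, c1 + c0)       # match as many units as possible
        p10 = min(t1, c0)               # discordant pairs that increase B
        p11 = min(t1 - p10, c1)         # concordant (1, 1) pairs
        p00 = min(t0, c0 - p10)         # concordant (0, 0) pairs
        p01 = m - p10 - p11 - p00       # remaining forced pairs increase C
        B += p10
        C += p01
    return B, C

B, C = max_b_minus_c([(2, 1, 1, 3), (0, 2, 2, 1)])
stat = (B - C) / math.sqrt(B + C + 1)   # corrected McNemar statistic
```

In the second stratum both treated units have outcome 0 but only one control has outcome 0, so one C-increasing pair is unavoidable; the greedy makes exactly that one.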
In the following sections, we offer exact formulas and computational algorithms for the distribution of the test statistic values obtained with the results in this section, when matches are considered part of the statistic.
3.3 Randomization Distribution of the Statistic Under the Sharp Null Hypothesis and Exclusively Binning Constraints
Now that we can compute the robust statistic on a given dataset, we must derive a formula for its distribution under the sharp null, so that we can test the hypothesis in a MaE framework, that is, when the matching procedure contributes to the uncertainty in the final value of the statistic of interest. Recall that the sharp null is Fisher's hypothesis of no treatment effect, that is, y_i(1) = y_i(0) for all i. In this section we provide an exact expression for the joint null distribution of the test statistics for the case in which the constraints are exclusively binning and the strata can be constructed prior to matching. As is standard in randomization inference, we hold the potential outcomes of all of the units in our sample to be fixed and nonrandom: the only randomness in our calculation will stem from the treatment assignment distribution. Note that all of the probability statements that follow are made conditional on the potential outcomes and covariates; for simplicity, we omit the relevant conditioning notation.
Our approach to constructing this randomization distribution is as follows: we assume that any set of units that could be matched together under a good assignment has the same probability of receiving the treatment, and that, after treatment is assigned, matches are made to compute the statistic with the program in Formulation 2.
Since all good matches are made within the same stratum, we can assume that units within the same stratum receive the treatment with equal probability. This implies that Assumption 2 becomes: (Stratified Conditional Ignorability) For all units i with x_i ∈ X_s: Pr(t_i = 1) = p_s, with p_s ∈ (0, 1). Under Assumption 3.3 and under the sharp null, the data generating process for the count variables in each stratum is as follows:
where n_1^s denotes the number of units in stratum s that have potential outcomes equal to 1, n_0^s denotes the number of units in the same stratum with potential outcomes equal to 0, and n^s = n_1^s + n_0^s. Note that we do not know with certainty how many units are treated and untreated in each stratum: this is equivalent to assuming that n^s Bernoulli trials are performed within each stratum. In order to compute the distribution of the statistic, an estimate of p_s is needed for each stratum. Under Assumption 3.3 this probability can be estimated consistently and in an unbiased way with p̂_s = n_t^s / n^s, or other equivalent estimators.
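The stratum-level Bernoulli-trial process and its estimator can be sketched in a few lines (the stratum size and p_s below are hypothetical): each unit independently receives treatment with probability p_s, so the treated count is a sum of Bernoulli trials, and p_s is estimated by the observed treated fraction.

```python
import random

# Simulation sketch of the stratified data generating process (parameters are
# hypothetical): within a stratum, each unit independently receives treatment
# with probability p_s, so the treated count is a sum of Bernoulli trials, and
# p_s is estimated by the observed treated fraction n_t^s / n^s.
random.seed(7)

def simulate_stratum(n_units, p_s):
    treatments = [1 if random.random() < p_s else 0 for _ in range(n_units)]
    n_t = sum(treatments)
    p_hat = n_t / n_units   # consistent, unbiased estimator of p_s
    return n_t, p_hat

n_t, p_hat = simulate_stratum(n_units=10000, p_s=0.3)
```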
For convenience, we also introduce truncated versions of the variables above, which explicitly take into account the fact that exactly κ_s = min(n_t^s, n_c^s) matches are made within each stratum:
We now introduce a randomization distribution of the truncated variables in any stratum. Note that, by definition, these distributions are independent across strata, which is something we will use in deriving an expression for the distribution of the statistic. As a reminder, κ_s denotes the number of matches made within each stratum. The theorems below can appear notationally complex; however, they lend themselves well to implementation for computing the distributions of interest. (Randomization Distribution of Truncated Variables) For each stratum s, let p_s be fixed and known, and let the count variables be drawn from the data generating process of Equations (21)–(26). Then the truncated count variables have the following joint distribution:
Proof. See Appendix 8.2. ∎
Note that, while the domain of the distribution above cannot be expressed by a simpler formula, it is simple and computationally fast to enumerate it for finite n^s. Finally, the joint null distribution of (T_min, T_max) is given in the following theorem: (Randomization Distribution of (T_min, T_max)) For each stratum s, let p_s be fixed and known, and let the data D be drawn from the data generating process of Equations (21)–(26). Let T_max be the maximum of Formulation 2 on D, and let T_min be the minimum of the problem on the same variables. Let Ω be the Cartesian product of the supports of the stratum-level count variables, such that each element of Ω is a 4-tuple of vectors. The joint pmf of (T_min, T_max), for two values (t_min, t_max), is: