Fairness Through Causal Awareness: Learning Latent-Variable Models for Biased Data

09/07/2018 · David Madras et al. · University of Toronto

How do we learn from biased data? Historical datasets often reflect historical prejudices; sensitive or protected attributes may affect the observed treatments and outcomes. Classification algorithms tasked with predicting outcomes accurately from these datasets tend to replicate these biases. We advocate a causal modeling approach to learning from biased data and reframe fair classification as an intervention problem. We propose a causal model in which the sensitive attribute confounds both the treatment and the outcome. Building on prior work in deep learning and generative modeling, we describe how to learn the parameters of this causal model from observational data alone, even in the presence of unobserved confounders. We show experimentally that fairness-aware causal modeling provides better estimates of the causal effects between the sensitive attribute, the treatment, and the outcome. We further present evidence that estimating these causal effects can help us to learn policies which are both more accurate and fair, when presented with a historically biased dataset.


1. Introduction

In this work, we consider the problem of fair decision-making from biased datasets. Much work has been done recently on the problem of fair classification (Zafar et al., 2017; Hardt et al., 2016; Bechavod and Ligett, 2017; Agarwal et al., 2018), yielding an abundant supply of definitions, models, and algorithms for the purposes of learning classifiers whose outputs satisfy distributional constraints. Some of the canonical problems for which these algorithms have been proposed are loan assignment (Hardt et al., 2016), criminal risk assessment (Chouldechova, 2017), and school admissions (Friedler et al., 2016). However, none of these problems is fully specified by the classification paradigm. Rather, they are decision-making problems: each problem requires an action (or “treatment”) to be taken in the world, which in turn yields an outcome. In other words, the central question is how to intervene in an ongoing and evolving process, rather than how to predict outcomes alone (Barabas et al., 2018).

Decision-making, i.e. learning to intervene, requires a fundamentally different approach from learning to classify: historical training data are the product of past interventions and thus provide an incomplete view of all possible outcomes. Only actions which were previously chosen yield observable outcomes in the training data, while the implicit counterfactual outcomes (the outcome that would have occurred had another action been taken) are never observed. The incompleteness of this data can have great impact on learning and inference (Rubin, 1976).

It has been widely argued that biased data yields unfair machine learning systems (Kallus and Zhou, 2018; Hashimoto et al., 2018; Ensign et al., 2018). In this work we examine dataset bias through the lens of causal inference. To understand how past decisions may bias a dataset, we first must understand how sensitive attributes may have affected the generative process which created the dataset, including the (historical) decision makers’ actions (treatments) and results (outcomes). Causal inference is well suited to this task: since we are interested in decision-making rather than classification, we should be interested in the causal effects of actions rather than correlations. Causal inference has the added benefit of answering counterfactual queries: What would this outcome have been under another treatment? How would the outcome change if the sensitive attribute were changed, all else being equal? These questions are core to the mission of learning fair systems which aim to inform decision-making (Kusner et al., 2017).

While there is much that causal inference can offer to the field of fair machine learning, it also poses several significant challenges. For example, the presence of hidden confounders—unobserved factors that affect both the historical choice of treatment and the outcome—often prohibits the exact inference of causal effects. Additionally, understanding effects at the individual level can be especially complex, particularly if the outcome is non-linear in the data and treatments. These technical difficulties are often amplified by the problem scope of modern machine learning, where large and high-dimensional datasets are commonplace.

To address these challenges, we propose a model for fairly estimating individual-level causal effects from biased data, which combines causal modeling (Pearl, 2009) with approximate inference in deep latent variable models (Kucukelbir et al., 2017; Louizos et al., 2017). Our focus on individual-level causal effects and counterfactuals provides a natural fit for application areas requiring fair policies and treatments for individuals, such as finance, medicine, and law. Specifically, we incorporate the sensitive attribute into our model as a confounding factor, which can possibly influence both the treatment and the outcome. This is a first step towards achieving “fairness through awareness” (Dwork et al., 2012) in the interventional setting.

Our model also leverages recent advances in deep latent-variable modeling to model potential hidden confounders as well as complex, non-linear functions between variables, which greatly increases the class of relationships it can represent. Through experimental analysis, we show that our model can outperform non-causal models, as well as causal models which do not consider the sensitive attribute as a confounder. We further explore the performance of this model, showing that fairness-aware causal modeling can lead to more accurate, fairer policies in decision-making systems.

2. Background

2.1. Causal Inference

We employ Structural Causal Models (SCMs), which provide a general theory for modeling causal relationships between variables (Pearl, 2009). An SCM is defined by a directed graph, containing vertices and edges, which respectively represent variables in the world and their pairwise causal relationships. There are two types of vertices: exogenous variables and endogenous variables. Exogenous variables are unspecified by the model; we model them as unexplained noise distributions, and they have no parents. Endogenous variables are the objects we wish to understand; they are descendants of the exogenous variables. The value of each endogenous variable is fully determined by its ancestors: each has a structural function which maps the values of its immediate parents to its own value. This function is deterministic; any randomness in an SCM is due to its exogenous variables.

In this paper, we are primarily concerned with three endogenous variables in particular: X, the observable features (or covariates) of some example; T, a treatment which is applied to an example; and Y, the outcome of a treatment. Our decision problem is: given an example with particular values x for its features, what value t should we assign to the treatment T in order to produce the best outcome Y? This is fundamentally different from a classification problem, since typically we observe the result of only one treatment per example. (Note that we use the terms treatment and outcome as general descriptors of a decision made/action taken and its result, respectively. These terms are associated with an alternative theory of causal inference (Rubin, 2005) which can also be used to describe the methods we propose, but which we will not discuss in this paper.)
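To make the SCM mechanics concrete, here is a minimal sketch in Python. The particular functional forms below are toy assumptions of ours, not a model from this paper; the point is that exogenous noise is the only source of randomness, and each endogenous variable is computed deterministically from its parents.

```python
import math
import random

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def sample_scm(rng):
    # Exogenous variables: unexplained noise, with no parents.
    eps_x, eps_t, eps_y = rng.gauss(0, 1), rng.random(), rng.gauss(0, 1)
    # Endogenous variables: deterministic functions of parents and noise.
    x = eps_x                            # covariates X
    t = int(eps_t < sigmoid(x))          # treatment T, influenced by X
    y = 2.0 * t + 0.5 * x + eps_y        # outcome Y, influenced by T and X
    return x, t, y
```

Re-running `sample_scm` with the same seeded `rng` reproduces the same draw, reflecting that all randomness lives in the exogenous variables.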

To answer this decision problem, we need to understand the value Y will take if we intervene on T and set it to the value t. Our first instinct may be to estimate P(Y | T = t, X = x). However, this is unsatisfactory in general. If we are estimating these probabilities from observational data, then the fact that an example received treatment t in the past may have some correlation with its historical outcome y. This “confounding” effect (the fact that X has an effect on both T and Y) is depicted in Figure 0(a) by the arrows pointing out of X into T and Y. For instance, in an observational medical trial, it is possible that young people are more likely to choose a treatment, and also that young people are more likely to recover. A supervised learning model, given this data, may then overestimate the average effectiveness of the treatment on a test population. Broadly, to understand the effect of assigning a treatment, supervised learning is not enough; we need to model the functions of the SCM.

Once we have a fully defined SCM, we can use the do operation (Pearl, 2009) to simulate the distribution over Y given that we assign some treatment t; we denote this as P(Y | do(T = t)). We do this through graph surgery: we assign the value t to T by removing all arrows going into T from the SCM and setting the corresponding structural equation to output the desired value regardless of its input. We then set T = t and continue with inference of Y as we normally would.
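The do operation can be sketched as graph surgery on a toy SCM (our own illustrative functions, not this paper's model): to simulate do(T = 1) we delete T's structural equation and clamp T before computing its descendants. Because the covariate confounds both treatment and outcome here, the observational estimate of E[Y | T = 1] overshoots the interventional E[Y | do(T = 1)].

```python
import math
import random

def sample_y(rng, do_t=None):
    x = rng.gauss(0, 1)                                     # confounder X
    if do_t is None:
        t = int(rng.random() < 1.0 / (1.0 + math.exp(-x)))  # observational T
    else:
        t = do_t                                            # graph surgery: clamp T
    return t, 2.0 * t + 0.5 * x + rng.gauss(0, 0.1)         # outcome Y

rng = random.Random(0)
obs = [sample_y(rng) for _ in range(20000)]
obs_mean = sum(y for t, y in obs if t == 1) / sum(1 for t, y in obs if t == 1)
do_mean = sum(sample_y(rng, do_t=1)[1] for _ in range(20000)) / 20000
# Confounding by X inflates obs_mean above the true interventional
# mean E[Y | do(T = 1)] = 2.0 in this toy model.
```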

A common assumption in causal modelling is the “no hidden confounders” assumption, which states that there are no unobserved variables affecting both the treatment and outcome. We follow Louizos et al. (2017), and use variational inference to model confounders that are not directly observed but can be abstracted from proxies. In Sec. 4 we consider the implications of this approach and discuss alternative assumptions.

2.2. Approximate Inference

Individual and population-level causal effects can be estimated via the do operation when the values of all confounding variables are observed (Pearl, 2009), which motivates the common no-hidden-confounders assumption in causal inference. However, this assumption is rather strong, and precludes classical causal inference in many situations relevant to fair machine learning, e.g., where ill-quantified and hard-to-observe factors such as socio-economic status (SES) may significantly confound the observable data. Therefore we follow Louizos et al. (2017) in modeling unobserved confounders using a high-dimensional latent variable Z to be inferred for each observation. They prove that if the full joint distribution of the model variables is successfully recovered, individual treatment effects are identifiable, even in the presence of hidden confounders. In other words, causal effects are identifiable insofar as exact inference can be carried out and the observed covariates are sufficiently informative.

Because exact inference of the posterior over Z is intractable for many interesting models, we approximately infer it by variational inference: we specify an approximate posterior q(Z | X) from a parametric family of distributions, and learn the parameters that best approximate the true posterior by maximizing the evidence lower bound (ELBO) of the marginal data likelihood (Wainwright et al., 2008). In particular, we amortize inference by training a neural network (whose functional form is specified separately from the causal model) to predict the parameters of q given X (Kingma and Welling, 2014). Amortized inference is much faster but less optimal than local inference (Kim et al., 2018); alternate inference strategies could be explored for applications where the importance of accuracy in individual estimation justifies the additional computational cost.

2.3. TARNets

TARNets (Shalit et al., 2017) are a class of neural network architectures for estimating outcomes of a binary treatment. The network comprises two separate arms, each of which predicts the outcomes associated with one treatment value; the arms share parameters in the lower layers. The entire network is trained end to end using gradient-based optimization, but with only one arm (the one corresponding to the treatment which was actually given) receiving error signal for any given example. The TARNet prediction ŷ for input variables x and potential intervention t is expressed by combining the shared representation function Φ with the two functions h0 and h1 corresponding to the separate prediction arms. This yields two composed functions,

(1) ŷ = t · h1(Φ(x)) + (1 − t) · h0(Φ(x)),

with Φ, h0, and h1 realized as neural networks. Shalit et al. (2017) explore a group-wise MMD penalty on the outputs of Φ; we do not use this.
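The two-arm structure can be sketched as a minimal forward pass (an illustrative sketch with made-up layer sizes and random weights, not the authors' implementation): a shared representation feeds two heads, and the binary treatment selects the head. In training, only the head matching the observed treatment would receive gradient signal for a given example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_rep = 5, 8
W_phi = 0.1 * rng.normal(size=(d_in, d_rep))                   # shared layers Phi
W_arm = [0.1 * rng.normal(size=(d_rep, 1)) for _ in range(2)]  # heads h0, h1

def tarnet_predict(x, t):
    phi = np.tanh(x @ W_phi)          # shared representation Phi(x)
    return (phi @ W_arm[t]).item()    # head h_t selected by treatment t
```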

3. Fair Causal Inference

As stated in Sec. 2.1, we are interested in modeling the causal effects of treatments on outcomes. However, when attempting to learn fairly from a biased dataset, this problem takes on an extra dimension. In this context, we become concerned with understanding causal effects in the presence of a sensitive attribute (or protected attribute); examples include race, gender, age, or SES. When learning from historical data, we may believe that one of these attributes affected the observable treatments and outcomes, resulting in a biased dataset.

Lum and Isaac (2016) give an example in the domain of predictive policing of how a dataset of drug crimes may become biased with respect to race through unfair policing practices. They note that it is impossible to collect a dataset of all drug crimes in some area; rather, these datasets are really tracking drug arrests. Due to a higher level of police presence in heavily Black than heavily White communities, recorded drug arrests will by nature over-represent Black communities. Therefore, a predictive policing algorithm which attempts to fit this data will continue the pattern of over-policing Black communities. Lum and Isaac (2016) provide experimental validation of this hypothesis through simulation, contrasting the output of a common predictive policing algorithm with independent, demographic-based estimates of drug use by neighborhood. Their work shows that wrongly specifying a learning problem as one of supervised classification can lead to replicating past biases. In order to account for this in the learning process, we should be aware of the biases which shaped the data — which may include sensitive attributes that historically affected the treatment and/or outcome.

Using the above example for concreteness, we specify the variables at play. The decision-making problem is: should police be sent to a given neighborhood at a given time? The variables are:

  • A: a sensitive attribute. For example, the majority race of a neighborhood.

  • T: a treatment. For example, the presence or absence of police in a certain neighborhood on a particular day.

  • Y: an outcome. For example, the number of arrests recorded in a given neighborhood on a particular day.

  • X: d-dimensional observed features. For example, statistics about the neighborhood, which may change day-to-day.

We will represent sensitive attributes and treatments as binary throughout this paper; we recognize this is not always an optimal modeling choice in practice. Note that the choice of treatment will causally alter the outcome—an arrest cannot occur if there are no police in the area. Furthermore, the sensitive attribute can causally affect the outcome as well; research has shown that policing can disparately affect various races, even controlling for police presence (Gelman et al., 2007) (the treatment in this case).

We note that in various domains, there may be more variables of interest than the ones we list here, and more appropriate causal models than those shown in Fig. 1. However, we believe that the setup we describe is widely applicable and contains the minimal set of variables to be useful for fairness-aware causal analysis. We are interested in calculating causal effects between the above variables. In particular, we seek answers to the following three questions:

What is the effect of the treatment on the outcome?

This will help us to understand which treatment t is likely to produce a favorable outcome for a given individual. Let us denote by y_{T=t}(x, a) the expected conditional outcome under do(T = t), that is, the ground-truth value taken by Y when the treatment is assigned the value t, conditioning on the values x and a for the features and sensitive attribute respectively. Then, we can express the individual effect of T on Y as

(2) y_{T=1}(x, a) − y_{T=0}(x, a).

What is the effect of the sensitive attribute on the treatment?

This allows us to understand how the treatment assignment was biased in the data. Similarly, we can define t_{A=a}(x), which is the expected conditional treatment in the historical data when the value a is assigned to the sensitive attribute. Then, the individual effect of A on T can be expressed as

(3) t_{A=1}(x) − t_{A=0}(x).

What is the effect of the sensitive attribute on the outcome?

This allows us to understand what bias is introduced into the historically observed outcome. We can also define y_{A=a}(x, t) as the expected conditional outcome under do(A = a): the ground-truth value of Y, conditioned on the features being x, if the sensitive attribute were assigned the value a and the treatment were assigned the ground-truth value t. Then, we can express the individual effect of A on Y as

(4) y_{A=1}(x, t) − y_{A=0}(x, t).
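As a toy illustration of these three quantities, suppose we had access to hypothetical ground-truth functions for the expected conditional outcome and treatment (the linear stand-ins below are our own assumptions, not the paper's learned model). Each effect is then just a difference of two evaluations:

```python
def y_cond(x, a, t):
    # Toy stand-in for the expected conditional outcome y(x, a, t).
    return 0.3 * x + 0.8 * a + 1.5 * t

def t_cond(x, a):
    # Toy stand-in for the expected conditional treatment (propensity).
    return min(1.0, max(0.0, 0.4 + 0.3 * a + 0.05 * x))

def effect_t_on_y(x, a):      # Eq. 2 style: y_{T=1}(x, a) - y_{T=0}(x, a)
    return y_cond(x, a, 1) - y_cond(x, a, 0)

def effect_a_on_t(x):         # Eq. 3 style: t_{A=1}(x) - t_{A=0}(x)
    return t_cond(x, 1) - t_cond(x, 0)

def effect_a_on_y(x, t):      # Eq. 4 style: y_{A=1}(x, t) - y_{A=0}(x, t)
    return y_cond(x, 1, t) - y_cond(x, 0, t)
```

Averaging these individual-level quantities over a dataset of observed x (and a or t) yields the population-level summaries used later in the experiments.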

3.1. Intervening on Sensitive Attributes

There has been some disagreement around the notion of intervening on an immutable (or effectively immutable) sensitive attribute. Holland (1986) argues that there is “no causation without manipulation” — i.e. an attribute can never be a cause; only an experience undergone can be. Briefly stated, he argues that if the factual and counterfactual versions cannot be “defined in principle, it is impossible to define the causal effect”. In a counterargument, Marini and Singer (1988) claim that a “synthesis of intrinsic and extrinsic determination [provides] a more adequate picture of causal relations” — meaning that both externally imposed experiences (extrinsic) and internally defined attributes (intrinsic) are valid conceptual components of a theory of causation. We agree with this view — that the notion of a causal effect of an immutable attribute is valid — and believe that it is particularly useful in a fairness context.

Specifically pertaining to race, some argue that it is possible to understand the causal effect of an immutable attribute in terms of the effects of more manipulable attributes (proxies). VanderWeele and Robinson (2014) argue that, rather than interpreting a causal effect estimate for A as a hypothetical randomized intervention on A, one can interpret it as a particular type of intervention on some other set of manipulable variables related to A (under certain graphical and distributional assumptions on those variables). Sen and Wasow (2016) take a constructivist approach, and consider race to be composed of constituent parts, some of which can be theoretically manipulated. They describe several experimental designs which could estimate the effects of immutable attributes.

Another issue with intervening on sensitive attributes is that, since many are “assigned at conception”, all observed covariates are post-treatment (Sen and Wasow, 2016) (as reflected in the design of our SCM in Fig. 0(d)). In statistical analysis, a frequent approach is to ignore all post-treatment variables to avoid introducing collider biases (Gelman et al., 2007; King et al., 1994). However, in our model, the purpose of the covariates X is to deduce the true (unobserved) values of the latent Z for that individual. Therefore, when conditioning on the observed covariates, correlation of X and A is the objective, rather than an undesired side effect. This is the first step (“Abduction”) of computing counterfactuals according to Pearl (2009); we can think of this as adjusting for bias (of the sensitive attribute) in the X-generating process.

4. Proposed Method

In this section we first conceptualize and describe our proposed causal model—depicted in Fig. 0(d)—then discuss the parameterization of the corresponding SCMs and the learning procedure. A common causal modelling approach is to define a new SCM for each problem (Pearl, 2009), taking advantage of domain-specific knowledge for that particular problem. This stands in contrast to a classic machine learning (ML) approach, which aims to process data and draw conclusions as generally as possible, by automatically discovering patterns of correlation in the data. While the causal modelling approach is capable of detecting effects the ML approach cannot, the ML approach is attractive since it provides modularity, generality and a more automated data processing pipeline. In this work, we aim to interpolate between the two approaches by considering a single, general causal model for observational data. Our model contains what we argue is a minimal set of fairly general causal variables for discovering treatment effects and biases in the data-generation process, allowing us to interface causally with arbitrary data that fits the proposed structure.

Two features of our causal model are noteworthy. First is the explicit consideration of the sensitive attribute A—a potential source of dataset bias—as a confounder, which causally affects both the treatment T and the outcome Y. This contrasts with approaches from outside the fairness literature (e.g. Louizos et al. (2017), Fig. 0(b)), which in a fairness setting (Fig. 0(c)) would treat potential sensitive attributes as equivalent to other observed features. Our model accounts for the possibility that a sensitive attribute may have causal influence on the observed features, treatments and outcomes, and on the historical process which generated them. It makes the sensitive attribute distinct from the other observed attributes X, which we understand not as confounders but as observed proxies. We can think of this as a causal modeling analogue of “fairness through awareness”. By actively adjusting for causal confounding effects of sensitive attributes, we can build a model which accounts for the interplay between the treatment and outcome for both values of the sensitive attribute.

The other noteworthy aspect of our model is the latent variable Z. Together, Z and A make up all the confounding variables. We note two important points about these confounders. Firstly, we clarify that the model class we propose (a latent Gaussian Z and a deep neural network) is not necessarily the definitive model of the confounders of T and Y; however, it is a flexible one, with numerous applications in machine learning (Rezende et al., 2014). Secondly, we note that causal inference and machine learning have different conventions around unobserved (i.e. latent) variables: in causal inference, these variables are generally considered to be nameable objects in the world (e.g. SES, historical prejudice), whereas in machine learning they represent some unspecified (and perhaps abstract) structure in the data. Our Z follows the machine learning convention.

As in Louizos et al. (2017), Z represents all the unobserved confounding variables which affect the outcomes or treatments (other than A). The features X can be seen as proxies (noisy observations) for the confounders Z. Altogether, the endogenous variables in our model are A, Z, X, T, and Y. We also have exogenous noise variables ε (not shown), each the immediate parent of (only) its respective endogenous variable. The structural equations are:

(5) Z = f_Z(ε_Z), A = f_A(ε_A), X = f_X(Z, A, ε_X), T = f_T(Z, A, ε_T), Y = f_Y(Z, A, T, ε_Y)

Since Z does not necessarily refer to tangible objects in the world, it is reasonable that Z is independent of A in our model. This does not prevent a characteristic such as SES (which may be correlated with A) from being a confounder — rather, Z could represent the component of SES which is not based on A. Since both confounders are inputs to all other variables in the SCM, the model can learn to represent variables which are based on both A and Z (e.g. SES) as a joint distribution of A and Z.

With this SCM in hand, we can estimate various interventional outcomes, if we know the values of Z. For instance, we might estimate:

(6) E[Y | Z = z, do(T = t), do(A = a)], E[Y | Z = z, do(T = t)], E[Y | Z = z, do(A = a)],

which are the expected values over outcomes of interventions on T and A, just T, and just A, respectively.

However, the problem with the calculations in Eq. 6 is that Z is unobserved, so we cannot simply condition on its value. Rather, we observe some proxies X. Since the structural equations go in the other direction (X is a function of Z, not the other way around), inferring Z from a given X is a non-trivial matter.

In summary, we need to learn two things: a generative model which can approximate the structural functions f, and an inference model which can approximate the distribution of Z given X. Following the lead of Louizos et al. (2017), we use variational inference parametrized by deep neural networks to learn the parameters of both of these models jointly. In variational inference, we aim to learn an approximate distribution over the joint variables (Z, A, X, T, Y) by maximizing a variational lower bound on the log-probability of the observed data. As demonstrated in Louizos et al. (2017), the causal effects in the model become identifiable if we can learn this joint distribution. We extend their proof in Appendix B to show that identifiability holds when including the sensitive attribute in the model (as in Fig. 0(d)).

We discuss here the identifiability condition from Louizos et al. (2017). Given some treatment T and outcome Y, the classic “no hidden confounders” assumption asserts that the set of observed variables blocks all backdoor paths from T to Y. Louizos et al. (2017) weaken this: they assume that there is a set of confounding variables Z which blocks all backdoor paths from T to Y, where the proxies X are observed and Z is unobserved. They claim that if we recover the full joint distribution over these variables, then we can identify the causal effect of T on Y. However, this is only possible if we have sufficiently informative proxies X. While recovering the full joint distribution does not mean we have to measure every confounder, we do have to at least measure some proxy for each confounder.

This is a weaker assumption, but it is not fully general. There may be confounding factors which cannot be inferred from the proxies; in that case, our model will be unable to learn the joint distribution, the causal effect will be unidentifiable, and we are back to square one: our causal estimates may be inaccurate. Determining the exact fairness implications of this remains an open problem, as it would depend on which confounders were missing and which proxies were already collected. A complicating factor is that testing for unconfoundedness is difficult, and usually requires making further assumptions (Tran et al., 2016). Therefore, we might unintentionally make unfair inferences if we are unaware that we cannot infer all confounders. If we suspect this is the case, one solution is to collect more proxies. This provides an alternative motivation for the idea of increasing fairness by measuring additional variables (Chen et al., 2018).

To learn a generative model of the data which is faithful to the structural model defined in Eq. 5, we define distributions which will approximate the various conditional probabilities in our model. We model the joint probability assuming the following factorization:

(7) p(z, a, x, t, y) = p(z) p(a) p(x | z, a) p(t | z, a) p(y | z, a, t).

Each of these factors corresponds to a structural function f in Eq. 5: formally, each conditional p(V | W), for an endogenous variable V and the subset of endogenous variables W which are its parents, models the distribution over the output of f_V with the exogenous noise marginalized out. For simplicity, we choose computationally tractable probability distributions for each conditional probability in Eq. 7:

(8) p(z) = ∏_{j=1}^{d_z} N(z_j | 0, 1); p(a) = Bern(π_a); p(x | z, a) = ∏_{j=1}^{d_x} p(x_j | z, a); p(t | z, a) = Bern(π_t(z, a))

where d_x and d_z are the dimensionalities of x and z respectively, and π_a is the empirical marginal probability of a across the dataset (if this is unknown, we could use a Beta prior over that distribution; in this paper we assume a is observed for every example). For p(y | z, a, t), we use either a Bernoulli or a Gaussian distribution, depending on whether y is binary or continuous:

(9) p(y | z, a, t) = Bern(π_y(z, a, t)) or p(y | z, a, t) = N(μ_y(z, a, t), σ_y²)
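The factorization in Eq. 7 corresponds to simple ancestral sampling: draw z and a from their priors, then x, t, and y from their conditionals. The sketch below uses toy linear stand-ins of our own devising for the conditional parameter functions, in place of the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_x = 2, 3
pi_a = 0.3                                  # stand-in marginal of A

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sample_example():
    z = rng.normal(size=d_z)                            # p(z) = N(0, I)
    a = rng.binomial(1, pi_a)                           # p(a) = Bern(pi_a)
    x = z.sum() + 0.5 * a + rng.normal(size=d_x)        # p(x | z, a)
    t = rng.binomial(1, sigmoid(z[0] + a))              # p(t | z, a)
    y = rng.binomial(1, sigmoid(z[1] + a + 2.0 * t))    # p(y | z, a, t)
    return z, a, x, t, y
```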

To flexibly model the potentially complex and non-linear relationships in the true generative process, we specify several of the distribution parameters from Eqs. 8 and 9 as the output of a function realized by a neural network (or a TARNet (Shalit et al., 2017)) with learned parameters. We parametrize the model of x with neural networks μ_x and σ_x:

(10) p(x | z, a) = N( μ_x(z, a), diag(σ_x²(z, a)) )

We use TARNets (Shalit et al., 2017) (see Sec. 2.3) to parameterize the distributions over t and y. In our model, a acts as the “treatment” for the TARNet that outputs t. Likewise, a and t are joint treatments affecting y — our model can be seen as a hierarchical TARNet, with one TARNet for each value of a, where each TARNet has an arm for each value of t. In all, this yields the following parametrization:

(11) π_t(z, a) = σ( h_a(Φ_T(z)) ), π_y(z, a, t) = σ( h_{a,t}(Φ_Y(z)) ),

and the same for μ_y and σ_y in the continuous case; where σ(·) is the sigmoid function and Φ and h are defined as in Sec. 2.3.

We further define an inference model q(z | x, a), to determine the values of the latent variables z given the observed x and a. This takes the form:

(12) q(z | x, a) = N( μ_z(x, a), diag(σ_z²(x, a)) ),

where the normal distribution is reparametrized analogously to Eq. 10 with networks μ_z and σ_z. Since a is always observed, we do not need to infer it, even though it is a confounder. We note that this is a different inference network from the one in Louizos et al. (2017) — we do not use the given treatments and outcomes in the inference model. We found it to be a simpler solution (no auxiliary networks necessary), and did not see a large change in performance. This is similar to the approach taken in Parbhoo et al. (2018).
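An amortized inference network of this kind can be sketched as a small MLP mapping the observed (x, a) to the mean and log-variance of a diagonal Gaussian over z, sampled with the reparameterization trick. The layer sizes and random weights below are illustrative placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_z, d_h = 5, 3, 16
W1 = 0.1 * rng.normal(size=(d_x + 1, d_h))    # input is [x; a]
W_mu = 0.1 * rng.normal(size=(d_h, d_z))      # head for mu_z(x, a)
W_lv = 0.1 * rng.normal(size=(d_h, d_z))      # head for log sigma_z^2(x, a)

def infer_z(x, a):
    h = np.tanh(np.concatenate([x, [a]]) @ W1)
    mu, logvar = h @ W_mu, h @ W_lv
    eps = rng.normal(size=d_z)                # reparameterization trick
    return mu + np.exp(0.5 * logvar) * eps, mu, logvar
```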

To learn the parameters of this model, we maximize the expected lower bound on the log-probability of the data (the ELBO), which takes the form below. We note that this is also a valid ELBO to optimize for lower-bounding the conditional log-probability of the treatments and outcomes given the data:

(13) L = E_{q(z | x, a)}[ log p(x | z, a) + log p(t | z, a) + log p(y | z, a, t) ] − KL( q(z | x, a) || p(z) )
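A per-example ELBO of this shape can be estimated with a single Monte Carlo sample in the standard VAE fashion; the sketch below (our own generic form, with the decoder log-likelihood left as a callback) uses the closed-form KL between a diagonal Gaussian and the standard-normal prior on z:

```python
import numpy as np

def kl_to_std_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def elbo_estimate(mu, logvar, log_lik, rng):
    # One-sample Monte Carlo estimate of E_q[log_lik(z)] - KL(q || p).
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparam.
    return log_lik(z) - kl_to_std_normal(mu, logvar)
```

Averaging this quantity over the dataset and ascending its gradient trains the generative and inference networks jointly.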

5. Related Work

Our work most closely relates to the Causal Effect Variational Autoencoder (Louizos et al., 2017). Some follow-up work is done by Parbhoo et al. (2018), who suggest a purely discriminative approach using the information bottleneck. Our model differs from this work in that it does not include a sensitive attribute, and does not contain a “reconstruction fidelity” term, in this case p(x | z, a). Previous papers which learn causal effects using deep learning (with all confounders observed) include Shalit et al. (2017) and Johansson et al. (2016), who propose TARNets as well as some form of balancing penalty.

The intersection of fairness and causality has been explored recently. Counterfactual fairness — the idea that a fair classifier is one whose prediction does not change under the counterfactual in which the sensitive attribute is flipped — is a major theme (Kusner et al., 2017). Criteria for fairness in treatments are proposed in Nabi and Shpitser (2018), and fair interventions are further explored in Kusner et al. (2018). Zhang and Bareinboim (2018) present a decomposition which provides a different way of understanding unfairness in a causal inference model. Other work focuses on the causal relationship between sensitive attributes and proxies in fair classification (Kilbertus et al., 2017).

Kallus and Zhou (2018) explore the idea of learning from biased data, making the point that a “fair” predictor learned on biased data may not be fair under certain forms of distributional shift, while not touching on causal ideas. Some conceptually similar work has looked at the “selective labels” problem (Lakkaraju et al., 2018; De-Arteaga et al., 2018), where only a biased selection of the data has labels available. There has also been related work on feedback loops in fairness, and the idea that past decisions can affect future ones, in the predictive policing (Lum and Isaac, 2016; Ensign et al., 2018) and recommender systems (Hashimoto et al., 2018) contexts, for example. Barabas et al. (2018) advocate for understanding many problems of fair prediction as ones of intervention instead. Another variational autoencoder-based fairness model is proposed in Louizos et al. (2016), but with the goal of fair representation learning, rather than causal modelling. Dwork et al. (2012) originated the term “fairness through awareness”, and argued that the sensitive attribute needed to be given a place of privilege in modelling in order to reduce unfairness of outcomes.

6. Experiments

In this section we compare various methods for causal effect estimation. The three effects we are interested in are:

  • A → T, the causal effect of the sensitive attribute A on the treatment T;

  • T → Y, the causal effect of the treatment T on the outcome Y;

  • A → Y, the causal effect of the sensitive attribute A on the outcome Y.

Note that all three effects are individual-level; that is, they are conditioned on an observed x (and possibly a), and then averaged across the dataset.
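To make the averaging concrete, the following sketch computes an individual-level effect of T on Y by flipping the intervention variable in a fitted outcome model and averaging across the dataset. The `predict` function and its linear form are illustrative assumptions standing in for a learned model, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned outcome model y = f(x, a, t);
# any regressor with this signature would serve.
def predict(x, a, t):
    return x.sum(axis=1) + 2.0 * t + 0.5 * a

n, d = 100, 5
x = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)

# Individual-level effect of T on Y: condition on each observed (x, a),
# flip the intervention variable, then average across the dataset.
ite_t = predict(x, a, t=1) - predict(x, a, t=0)
ate_t = ite_t.mean()
print(round(ate_t, 6))  # 2.0 for this toy model
```

The same pattern (flip a rather than t) yields the individual-level A → Y and A → T estimates.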

6.1. Data

We evaluate our model using semi-synthetic data. Evaluating causal models on non-synthetic data is challenging, since a randomized controlled trial on the intervention variable is required to validate correctness; this is doubly true in our case, where we are concerned with two different possible interventions. Additionally, while data from randomized controlled trials on treatment variables exists (albeit uncommonly), conducting a randomized controlled trial on a sensitive attribute is usually impossible.

We have adapted the IHDP dataset (Multisite, 1990; Brooks-Gunn et al., 1994), a standard semi-synthetic causal inference benchmark, for use in the setting of causal effect estimation under a sensitive attribute. The IHDP dataset comes from a randomized experiment run by the Infant Health and Development Program (in the US), which “targeted low-birth-weight, premature infants, and provided the treatment group with both intensive high-quality child care and home visits from a trained provider” (Hill, 2011). Pre-treatment variables were collected from the child (e.g. birth weight, sex) and from the mother at time of birth (e.g. age, marital status), including behaviors engaged in during the pregnancy (e.g. smoking cigarettes, drinking alcohol), as well as the site of the intervention (where the family resided). We choose our sensitive attribute to be the mother's race, binarized as White and non-White. We follow a method for generating outcomes similar to the Response B surface proposed in Hill (2011). However, our setup differs since we are interested in additionally modelling a sensitive attribute and hidden confounders, so three more steps must be taken. First, we need to generate an outcome for each example under the counterfactual value of the sensitive attribute. Second, we need to generate a treatment assignment for each example under the counterfactual value of the sensitive attribute. Finally, we need to remove some data from the observable measurements to act as a hidden confounder, as in Louizos et al. (2017).

We detail our full data generation method in Appendix A. We denote the outcome under interventions do(T = t), do(A = a) as y_{t,a}. The subroutines in Algorithms 2 and 3 generate all factual and counterfactual outcomes and treatments for each example, one for each possible setting of t and/or a. Values of the constants that we use for data generation can also be found in Appendix A.

We choose our hidden confounding feature z to be birth weight. In the second (optional) step of data generation, we choose to remove 0, 1, or 2 other features; choosing features which are highly correlated with the hidden confounder makes the estimation problem more difficult. When removing 0 features, we do nothing. When removing 1 feature, we remove the feature most highly correlated with z (head size). When removing 2 features, we remove the two features most highly correlated with z (head size and weeks born preterm).
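The proxy-removal step can be sketched as follows. The synthetic data and proxy strengths are illustrative assumptions, but the selection rule (drop the observed features most correlated with the hidden confounder) mirrors the procedure described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                    # hidden confounder (e.g. birth weight)
X = np.column_stack([
    z + 0.1 * rng.normal(size=n),         # strong proxy (e.g. head size)
    z + 0.5 * rng.normal(size=n),         # weaker proxy (e.g. weeks born preterm)
    rng.normal(size=n),                   # unrelated covariate
])

def drop_top_proxies(X, z, k):
    """Rank observed features by |correlation| with the hidden confounder z
    and drop the top k, making inference of z harder."""
    corr = np.abs([np.corrcoef(X[:, j], z)[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(corr)[: X.shape[1] - k]   # keep the least-correlated columns
    return X[:, np.sort(keep)]

X1 = drop_top_proxies(X, z, k=1)   # removes the strongest proxy
X2 = drop_top_proxies(X, z, k=2)   # removes the two strongest proxies
print(X1.shape, X2.shape)
```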

6.2. Experimental Setup

We run four different models for comparison, including the one we propose. Since we are interested in estimating three different causal effects simultaneously (A → T, T → Y, A → Y), we cannot compare against most standard causal inference benchmark models for treatment effect estimation. The models we test are the following:

  • Counterfactual MLP (CFMLP): a multilayer perceptron (MLP) which takes the treatment and sensitive attribute as input, concatenated to x, and aims to predict the outcome. Counterfactual outcomes are calculated by simply flipping the relevant attribute and re-inputting the modified vector to the MLP. A similar auxiliary network learns to predict the treatment from a concatenated to x.

  • Counterfactual Multiple MLP (CF4MLP): a set of four MLPs, one for each combination of (a, t). Examples are input into the appropriate MLP for the factual outcome, and simply input into another MLP for the appropriate counterfactual outcome. A similar pair of auxiliary networks predicts the treatment.

  • Causal Effect Variational Autoencoder with Sensitive Attribute (CVAE-A, Fig. 0(c)): a model similar to Louizos et al. (2017), but with the simpler inference model we propose. We incorporate the sensitive attribute by concatenating a to x as input; counterfactuals along a are taken by flipping a and re-inputting the modified vector. Counterfactuals along t are taken as in Louizos et al. (2017).

  • Fair Causal Effect Variational Autoencoder (FCVAE, Fig. 0(d)): our proposed fairness-aware causal model, with a concatenated to z as confounders. We run two versions: one where a is used to help with reconstructing x and inferring z (FCVAE-1), and one where it is not (FCVAE-2). Formally, the inference and generative models are q(z | x, a) and p(x | z, a) in FCVAE-1, and q(z | x) and p(x | z) in FCVAE-2. In both versions, a is a confounder of both the treatment and the outcome.
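As a concrete illustration of the flip-and-re-input procedure used by the MLP baselines above, here is a minimal sketch using scikit-learn's MLPRegressor as a stand-in for the paper's networks; the synthetic data, architecture, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, d = 400, 5
x = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n).astype(float)   # sensitive attribute
t = rng.integers(0, 2, size=n).astype(float)   # treatment
y = x @ rng.normal(size=d) + 3.0 * t + a + rng.normal(scale=0.1, size=n)

# CFMLP-style baseline: a single network over the concatenation [x, a, t].
X_fact = np.column_stack([x, a, t])
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X_fact, y)

# Counterfactual outcomes: flip the relevant attribute and re-input.
X_cf_t = np.column_stack([x, a, 1 - t])   # counterfactual along T
X_cf_a = np.column_stack([x, 1 - a, t])   # counterfactual along A
y_cf_t = mlp.predict(X_cf_t)
y_cf_a = mlp.predict(X_cf_a)
print(y_cf_t.shape, y_cf_a.shape)
```

Note that this purely predictive procedure has no mechanism for handling hidden confounders, which is the gap the CVAE-A and FCVAE are designed to address.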

The CFMLP is purely a classification baseline. It learns a mapping from input to output, estimating the conditional distribution P(y | x, a, t). The CF4MLP shares this goal but has a more complex architecture: it learns a disjoint set of parameters for each setting of interventions, allowing it to model completely separate generative processes. However, it is still ultimately concerned with supervised prediction. Furthermore, neither of these models is built to consider the impact of hidden confounders.

The CVAE-A is a model for causal inference of outcomes from treatments, so we should expect it to perform well at estimating T → Y. It is also designed to model these effects under hidden confounders. The difference between the CVAE-A and the MLPs therefore tells us the improvement that comes from appropriate causal modelling rather than classification.

However, the CVAE-A does not consider the sensitive attribute as a confounder; rather, it treats it simply as another covariate of x. So in comparing the FCVAE to the CVAE-A, we observe the improvement that comes from causally modelling the dataset unfairness stemming from a sensitive attribute. In comparing the FCVAE to the MLPs, we observe the full impact of the FCVAE: joint causal modelling of treatments, outcomes, sensitive attributes, and hidden confounders. See Appendix C for experimental details.

6.3. Results

6.3.1. Estimating Causal Effects

In this section, we evaluate how well the models from Sec. 6.2 can estimate the three causal effects A → T, T → Y, and A → Y. To avoid confusion with the words treatment and outcome, in each of these causal interactions we will refer to the causing variable as the intervention variable and the affected variable as the result variable. To evaluate how well our model can estimate causal effects, we use PEHE: Precision in Estimation of Heterogeneous Effects (Hill, 2011). This is calculated as PEHE = (1/n) Σ_i [(y_i(1) − y_i(0)) − (ŷ_i(1) − ŷ_i(0))]², where y_i(k) is the ground truth value of the result for example i under intervention value k, and ŷ_i(k) is our model's estimate of that quantity. PEHE measures our ability to model both the factual (ground truth) and the counterfactual results.
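A minimal implementation of the metric; this uses the squared-error form of Hill's PEHE (some papers report its square root instead), with hypothetical toy arrays for the check.

```python
import numpy as np

def pehe(y1, y0, y1_hat, y0_hat):
    """Precision in Estimation of Heterogeneous Effects (Hill, 2011):
    mean squared error between true and estimated individual-level effects."""
    true_effect = np.asarray(y1) - np.asarray(y0)
    est_effect = np.asarray(y1_hat) - np.asarray(y0_hat)
    return np.mean((true_effect - est_effect) ** 2)

# Toy check: estimates whose effect is off by a constant 1.0 give PEHE = 1.
y0, y1 = np.zeros(4), np.ones(4) * 2.0                 # true effect = 2
print(pehe(y1, y0, y1_hat=np.ones(4) * 3.0, y0_hat=np.zeros(4)))  # 1.0
```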

Model    A → T         T → Y        A → Y
CFMLP    0.681 ± 0.00  4.51 ± 0.13  3.28 ± 0.07
CF4MLP   0.667 ± 0.00  4.58 ± 0.13  3.71 ± 0.09
CVAE-A   0.665 ± 0.00  3.80 ± 0.10  3.04 ± 0.06
FCVAE-1  0.659 ± 0.00  3.82 ± 0.11  2.88 ± 0.06
FCVAE-2  0.659 ± 0.00  3.81 ± 0.11  2.78 ± 0.06
Table 1. PEHE for each model on IHDP data (no extra features removed). Mean and standard errors shown, as calculated over 500 random seedings.

Model    A → T         T → Y        A → Y
CFMLP    0.675 ± 0.00  4.30 ± 0.11  3.42 ± 0.08
CF4MLP   0.661 ± 0.00  4.37 ± 0.11  3.89 ± 0.07
CVAE-A   0.672 ± 0.00  4.05 ± 0.10  3.53 ± 0.07
FCVAE-1  0.663 ± 0.00  4.00 ± 0.10  3.39 ± 0.08
FCVAE-2  0.663 ± 0.00  3.99 ± 0.10  3.25 ± 0.07
Table 2. PEHE for each model on IHDP data (1 most informative feature removed). Mean and standard errors shown, as calculated over 500 random seedings.
Model    A → T         T → Y        A → Y
CFMLP    0.666 ± 0.00  6.03 ± 0.21  4.30 ± 0.12
CF4MLP   0.659 ± 0.00  5.77 ± 0.18  4.59 ± 0.10
CVAE-A   0.672 ± 0.00  5.46 ± 0.18  4.19 ± 0.10
FCVAE-1  0.659 ± 0.00  5.40 ± 0.18  4.07 ± 0.11
FCVAE-2  0.659 ± 0.00  5.39 ± 0.18  3.95 ± 0.10
Table 3. PEHE for each model on IHDP data (2 most informative features removed). Mean and standard errors shown, as calculated over 500 random seedings. Lower is better.

In Tables 1-3, we show the PEHE of each model described in Sec. 6.2 for each causal effect of interest. Each table shows results for a version of the dataset with 0-2 of the most informative features removed (as measured by correlation with the hidden confounder); the easiest problem is therefore with zero features removed, and the hardest with two. Note that in IHDP, A and T are binary while Y is continuous.

Generally, as expected, we observe that the causal models achieve lower PEHE on most estimation problems. Also as expected, we observe that the PEHE for the more complex estimation problems (T → Y and A → Y) increases as the most useful proxies are removed from the data. We suspect there is less variation in the results for A → T because it is a simpler problem: there are no extra confounders (other than z) or mediating factors to consider.

We find that our model (the FCVAE) compares favorably to the other models in this experiment. In general, the fairness-aware models (FCVAE-1 and FCVAE-2) achieve lower PEHE than all other models when estimating the causal effects relating to the sensitive attribute (A → T and A → Y). Furthermore, the FCVAE performs similarly to the CVAE-A at T → Y estimation as well, demonstrating a slight improvement (at least in the more difficult cases with 1 or 2 features removed).

One interesting note is that FCVAE-1 (where a is used in the reconstruction of x and in the inference of z) and FCVAE-2 perform similarly, with FCVAE-2 being slightly better, if anything. This may seem surprising at first, since one might imagine that using a would allow the model to learn better representations of z, particularly for the purpose of doing counterfactual inference across a.

To explore this further, we examine in Table 4 the latent representations learned by each model in terms of their encoder mutual information between x and z, calculated as the expected KL divergence from the encoder posterior to the prior, E_x[KL(q(z | x) || p(z))]. This quantity is roughly the same for both versions of the FCVAE, implying that the inference network does not leverage the additional information provided by a in its latent code z. This is in fact sensible, because the FCVAE has access to a as an observed confounder when modelling the structural equations. We also note that the CVAE-A contains about one bit of extra information in its latent code, implying some degree of success in capturing relevant information about a in z. But if the CVAE-A models all confounders during inference, why does it underperform relative to the FCVAE in estimating the downstream causal effects, especially A → Y? We hypothesize that, by making explicit the role of a as a confounder, the FCVAE can learn the interventional distributions with respect to a (e.g., p(y | do(a), t, z)) rather than the conditional distributions of the CVAE-A (e.g., p(y | a, t, z)); we suspect that the gating mechanism of the TARNet implementation of the structural equations is important in this regard.

Model    Encoder KL
CVAE-A   4.28 ± 0.10
FCVAE-1  3.50 ± 0.12
FCVAE-2  3.53 ± 0.12
Table 4. KL divergence from the encoder posterior to the prior after training on IHDP; equivalent to the encoder mutual information (Alemi et al., 2018). The CVAE-A and FCVAE-1 use (x, a) as input to the encoder, while FCVAE-2 uses x only. Mean and standard errors shown, as calculated over 500 random seedings.
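For the diagonal-Gaussian encoders typical of VAEs, the per-example KL term reported in Table 4 has a closed form. This sketch assumes a standard-normal prior; the shapes and inputs are illustrative.

```python
import numpy as np

def gaussian_kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ) per example, in nats.
    Averaged over the data, this is the encoder 'rate' of Alemi et al. (2018)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Sanity check: a posterior equal to the prior carries zero information.
mu = np.zeros((3, 10))
log_var = np.zeros((3, 10))
print(gaussian_kl_to_standard_normal(mu, log_var))  # [0. 0. 0.]
```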

6.3.2. Learning a Treatment Policy

The next natural question is: how does estimating these causal effects contribute to a fair decision-making policy? We examine two dimensions of this. We define a policy π as a function which maps inputs (features x and sensitive attribute a) to treatments. We suppose the goal is to assign treatments using a policy that maximizes its expected value V(π), defined here as the expected outcome it achieves over the data, i.e. V(π) = E_{x,a}[y_{π(x,a)}]. For example, we could imagine the treatments to be various medications, and the outcome to be some health indicator (e.g. number of months survived post-treatment).

We can derive a policy from an outcome prediction model simply by taking the argmax of the predicted outcomes over treatments, i.e. π(x, a) = argmax_t ŷ_t, where ŷ_t is the model's prediction of the true outcome y_t. The optimal policy π* takes the argmax over the ground truth outcomes every time.

First, we look at the mean regret of the policy π, which is the difference between the value of the optimal policy and its achieved value: R(π) = V(π*) − V(π). We note that in general a policy's regret is not easy to compute or bound without assumptions on the outcome distribution in the data. In Table 5, we display the expected regret values for the learned policies. We observe that the fairness-aware model achieves lower regret than the unaware causal model, and much lower regret than the non-causal models, for both the easier and more difficult settings of the IHDP data.

Model    0 removed    1 removed    2 removed
CFMLP    0.37 ± 0.02  0.42 ± 0.02  0.81 ± 0.04
CF4MLP   0.31 ± 0.02  0.43 ± 0.02  0.59 ± 0.02
CVAE-A   0.21 ± 0.01  0.38 ± 0.01  0.59 ± 0.02
FCVAE-1  0.19 ± 0.01  0.36 ± 0.01  0.55 ± 0.02
FCVAE-2  0.19 ± 0.01  0.35 ± 0.01  0.55 ± 0.02
Table 5. Regret for each model's policy on IHDP data with 0, 1, or 2 of the most useful covariates removed. Mean and standard errors shown, as calculated over 500 random seedings. Lower regret is better.
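With semi-synthetic data, a policy's value and regret can be computed directly from the ground-truth outcome table, since outcomes under every treatment are known. A sketch with hypothetical toy numbers:

```python
import numpy as np

# Ground-truth outcomes y[i, t] for each example i under each treatment t
# (available here because the data are semi-synthetic).
y = np.array([[1.0, 3.0],
              [2.0, 0.5],
              [0.0, 4.0]])

def value(policy, y):
    """Expected outcome achieved by a policy (one treatment index per example)."""
    return y[np.arange(len(y)), policy].mean()

pi_hat = np.array([1, 1, 1])        # some learned policy
pi_star = y.argmax(axis=1)          # optimal policy: argmax over true outcomes

regret = value(pi_star, y) - value(pi_hat, y)
print(round(regret, 6))  # 0.5
```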

Next, we attempt to measure the policy’s fairness. Most fairness metrics are designed for evaluating classification, not for intervention. However, Chen et al. (2018) explore an idea which is easily adjusted to the interventional setting: that an algorithm is unfair if it is much less accurate on one subgroup. Here, we adapt this notion to evaluate treatment policy fairness.

For any (x, a), let us say the policy is accurate if it chooses the treatment which in fact yields the best outcome for that individual, i.e. if π(x, a) = argmax_t y_t. We can then define the accuracy of the policy as Acc(π) = E_{x,a}[1[π(x, a) = argmax_t y_t]], where 1[·] is an indicator function. We define the subgroup accuracy Acc_a(π) as the accuracy calculated while conditioning (not intervening) on a particular value of a: Acc_a(π) = E_{x|a}[1[π(x, a) = argmax_t y_t]]. We condition rather than intervene on a here since we are interested in measuring the impact of the policy on real, existing populations, rather than hypothetical ones. Finally, to evaluate the fairness of the policy, we can look at the accuracy gap |Acc_0(π) − Acc_1(π)|. If this is high, the model is more unfair, since the policy has been more successful at modelling one group than the other, and much more consistently chooses the correct treatment for individuals in that group.
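The accuracy gap can be computed directly when ground-truth outcomes under every treatment are available, as in the semi-synthetic setting. The toy values below are illustrative.

```python
import numpy as np

def accuracy_gap(policy, y, a):
    """Absolute gap in policy accuracy between sensitive-attribute subgroups.
    policy[i] is the chosen treatment; y[i, t] is the true outcome under t."""
    correct = (policy == y.argmax(axis=1)).astype(float)
    acc0 = correct[a == 0].mean()   # conditioning, not intervening, on a
    acc1 = correct[a == 1].mean()
    return abs(acc0 - acc1)

y = np.array([[1, 2], [3, 1], [0, 5], [2, 0]], dtype=float)
a = np.array([0, 0, 1, 1])
pi = np.array([1, 0, 0, 0])   # correct for group a=0, half-correct for a=1
print(accuracy_gap(pi, y, a))  # 0.5
```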

In Table 6 we display the accuracy gaps for our models and baselines on the IHDP dataset. We observe that the FCVAE achieves a smaller accuracy gap than those which do not consider the effect of the sensitive attribute. This is an encouraging sign that by understanding the confounding influence of sensitive attributes in biasing historical datasets, we can learn treatment policies which are more accurate for all subgroups of the data.

Model    0 removed      1 removed      2 removed
CFMLP    0.042 ± 0.002  0.033 ± 0.002  0.062 ± 0.002
CF4MLP   0.034 ± 0.002  0.038 ± 0.002  0.054 ± 0.002
CVAE-A   0.033 ± 0.001  0.028 ± 0.001  0.051 ± 0.002
FCVAE-1  0.031 ± 0.001  0.028 ± 0.001  0.046 ± 0.001
FCVAE-2  0.030 ± 0.001  0.027 ± 0.001  0.047 ± 0.001
Table 6. Accuracy gap for each model's policy on IHDP data with 0, 1, or 2 of the most useful covariates removed. Mean and standard errors shown, as calculated over 500 random seedings. Lower gap is more fair.

7. Discussion

In this paper, we proposed a causally-motivated model for learning from potentially biased data. We emphasize the importance of modeling the potential confounders of historical datasets: we model the sensitive attribute as an observed confounder contributing to dataset bias, and leverage deep latent variable models to approximately infer other hidden confounders.

In Sec. 6.3.2, we demonstrated how to use our model to learn, from data, a simple treatment policy which assigns treatments more accurately and fairly than several causal and non-causal baselines. Looking forward, the estimation of sensitive-attribute causal effects suggests several compelling new research directions, which we discuss non-exhaustively here:

  • Counterfactual Fairness: Our model learns outcomes for counterfactual values of both t and a. This means we could choose to implement a policy where we assess everyone under the same value a′, assigning treatments to all individuals, no matter their original value of a, based on the inferred outcome distribution p(y_{t,a′} | x). Such a policy respects the definition of counterfactual fairness proposed by Kusner et al. (2017), which requires invariance to counterfactuals in a at the individual level.

  • Path-Specific Effects: Our model allows us to decompose the A → Y effect into direct and indirect effects through mediation analysis of T (Robins and Greenland, 1992). By estimating this decomposition, we could learn a policy which respects path-specific fairness, as proposed by Nabi and Shpitser (2018).

  • Analyzing Historical Bias: Estimating the causal effects between A, T, and Y permits the analysis and comparison of bias in historical datasets. For instance, the A → T effect is a measure of bias in a historical treatment policy, and the A → Y effect is a measure of bias in whatever system historically generated the outcomes. This could serve as the basis of a bias-auditing technique for data scientists.

  • Data Augmentation: The absence of data (especially data missing not-at-random) has strong implications for downstream modeling in both fairness (Kallus and Zhou, 2018) and causal inference (Rubin, 1976). Our model outputs counterfactual outcomes for both t and a, which could be used for fair missing data imputation (Van Buuren, 2018; Sterne et al., 2009). This could in turn enable the application of simpler methods like supervised learning to interventional problems.

  • Fair Policies Under Constraints: In this paper, we consider an approach to fairness where understanding dataset bias is paramount, rather than the more common fairness-accuracy constraint-based tradeoff (Hardt et al., 2016; Menon and Williamson, 2018). However, in some domains we may be interested in policies which satisfy a fairness constraint (e.g., the same distribution of treatments is given to each group). Estimating the underlying causal effects would be useful for constrained policy learning.

  • Incorporating Prior Knowledge: Graphical models (both probabilistic and SCM) permit the specification of prior knowledge when modeling data, and provide a framework for inference that balances these beliefs with evidence from the data. This is a powerful fairness idea—we may believe a priori that a dataset should look a certain way if not for some bias. In the context of a fair machine learning pipeline that considers many datasets, this relates to the AutoML task of learning distributions over datasets that share global parameters (Edwards and Storkey, 2017).

In automated decision making, the focus on intervention over classification (Barabas et al., 2018) suggests the more equitable deployment of machine learning when only biased data are available, but also raises significant technical challenges. We believe causal modeling to be an invaluable tool in addressing these challenges, and hope that this paper contributes to the discussion around how best to understand and make predictions from existing datasets without replicating existing biases.

References

  • Agarwal et al. (2018) Alekh Agarwal, Alina Beygelzimer, Miroslav Dudik, John Langford, and Hanna Wallach. 2018. A Reductions Approach to Fair Classification. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.), Vol. 80. PMLR, Stockholmsmässan, Stockholm Sweden, 60–69.
  • Alemi et al. (2018) Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. 2018. Fixing a Broken ELBO. In International Conference on Machine Learning. 159–168.
  • Barabas et al. (2018) Chelsea Barabas, Madars Virza, Karthik Dinakar, Joichi Ito, and Jonathan Zittrain. 2018. Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research), Sorelle A. Friedler and Christo Wilson (Eds.), Vol. 81. PMLR, New York, NY, USA, 62–76.
  • Bechavod and Ligett (2017) Yahav Bechavod and Katrina Ligett. 2017. Penalizing Unfairness in Binary Classification. arXiv preprint arXiv:1707.00044 (2017).
  • Brooks-Gunn et al. (1994) J. Brooks-Gunn, F. Liaw, and P. Klebanov. 1994. Effects of Early Intervention on Cognitive Function of Low Birth Weight Preterm Infants,. Pediatric Physical Therapy 6, 1 (1994). https://doi.org/10.1097/00001577-199400610-00022
  • Chen et al. (2018) Irene Chen, Fredrik D Johansson, and David Sontag. 2018. Why Is My Classifier Discriminatory? In Advances in Neural Information Processing Systems 31.
  • Chouldechova (2017) Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data 5, 2 (2017), 153–163.
  • Clevert et al. (2015) Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and accurate deep network learning by exponential linear units (elus). International Conference on Learning Representations (2015).
  • De-Arteaga et al. (2018) Maria De-Arteaga, Artur Dubrawski, and Alexandra Chouldechova. 2018. Learning under selective labels in the presence of expert consistency. arXiv preprint arXiv:1807.00905 (2018).
  • Dwork et al. (2012) Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference. ACM, 214–226.
  • Edwards and Storkey (2017) Harrison Edwards and Amos Storkey. 2017. Towards a neural statistician. In International Conference on Learning Representations.
  • Ensign et al. (2018) Danielle Ensign, Sorelle A. Friedler, Scott Neville, Carlos Scheidegger, and Suresh Venkatasubramanian. 2018. Runaway Feedback Loops in Predictive Policing. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research), Sorelle A. Friedler and Christo Wilson (Eds.), Vol. 81. PMLR, New York, NY, USA, 160–171. http://proceedings.mlr.press/v81/ensign18a.html
  • Friedler et al. (2016) Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2016. On the (im) possibility of fairness. arXiv preprint arXiv:1609.07236 (2016).
  • Gelman et al. (2007) Andrew Gelman, Jeffrey Fagan, and Alex Kiss. 2007. An analysis of the New York City police department’s “stop-and-frisk” policy in the context of claims of racial bias. J. Amer. Statist. Assoc. 102, 479 (2007), 813–823.
  • Hardt et al. (2016) Moritz Hardt, Eric Price, Nati Srebro, et al. 2016. Equality of opportunity in supervised learning. In Advances in neural information processing systems. 3315–3323.
  • Hashimoto et al. (2018) Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. 2018. Fairness Without Demographics in Repeated Loss Minimization. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.), Vol. 80. PMLR, Stockholmsmässan, Stockholm Sweden, 1929–1938. http://proceedings.mlr.press/v80/hashimoto18a.html
  • Hill (2011) Jennifer L Hill. 2011. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics 20, 1 (2011), 217–240.
  • Holland (1986) Paul W Holland. 1986. Statistics and causal inference. Journal of the American statistical Association 81, 396 (1986), 945–960.
  • Johansson et al. (2016) Fredrik Johansson, Uri Shalit, and David Sontag. 2016. Learning representations for counterfactual inference. In International Conference on Machine Learning. 3020–3029.
  • Kallus and Zhou (2018) Nathan Kallus and Angela Zhou. 2018. Residual Unfairness in Fair Machine Learning from Prejudiced Data. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research), Jennifer Dy and Andreas Krause (Eds.), Vol. 80. PMLR, Stockholmsmässan, Stockholm Sweden, 2439–2448. http://proceedings.mlr.press/v80/kallus18a.html
  • Kilbertus et al. (2017) Niki Kilbertus, Mateo Rojas Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf. 2017. Avoiding Discrimination through Causal Reasoning. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 656–666. http://papers.nips.cc/paper/6668-avoiding-discrimination-through-causal-reasoning.pdf
  • Kim et al. (2018) Yoon Kim, Sam Wiseman, Andrew Miller, David Sontag, and Alexander Rush. 2018. Semi-Amortized Variational Autoencoders. In Proceedings of the 35th International Conference on Machine Learning.
  • King et al. (1994) Gary King, Robert O Keohane, and Sidney Verba. 1994. Designing social inquiry: Scientific inference in qualitative research. Princeton university press.
  • Kingma and Ba (2015) Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.
  • Kingma and Welling (2014) Diederik P Kingma and Max Welling. 2014. Auto-encoding variational bayes. International Conference on Learning Representations (2014).
  • Kucukelbir et al. (2017) Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei. 2017. Automatic differentiation variational inference. The Journal of Machine Learning Research 18, 1 (2017), 430–474.
  • Kusner et al. (2017) Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. In Advances in Neural Information Processing Systems. 4066–4076.
  • Kusner et al. (2018) Matt J Kusner, Chris Russell, Joshua R Loftus, and Ricardo Silva. 2018. Causal Interventions for Fairness. arXiv preprint arXiv:1806.02380 (2018).
  • Lakkaraju et al. (2018) Himabindu Lakkaraju, Jon Kleinberg, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. 2018. The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables. In International Conference on Knowledge Discovery and Data Mining.
  • Louizos et al. (2017) Christos Louizos, Uri Shalit, Joris M Mooij, David Sontag, Richard Zemel, and Max Welling. 2017. Causal effect inference with deep latent-variable models. In Advances in Neural Information Processing Systems. 6446–6456.
  • Louizos et al. (2016) Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. 2016. The variational fair autoencoder. International Conference on Learning Representations (2016).
  • Lum and Isaac (2016) Kristian Lum and William Isaac. 2016. To predict and serve? Significance 13, 5 (2016), 14–19.
  • Marini and Singer (1988) Margaret Mooney Marini and Burton Singer. 1988. Causality in the social sciences. Sociological methodology 18 (1988), 347–409.
  • Menon and Williamson (2018) Aditya Krishna Menon and Robert C Williamson. 2018. The cost of fairness in binary classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research), Sorelle A. Friedler and Christo Wilson (Eds.), Vol. 81. PMLR, New York, NY, USA, 107–118. http://proceedings.mlr.press/v81/menon18a.html
  • Multisite (1990) A Multisite. 1990. Enhancing the Outcomes of Low-Birth-Weight, Premature Infants. JAMA 263 (1990), 3035–3042.
  • Nabi and Shpitser (2018) Razieh Nabi and Ilya Shpitser. 2018. Fair inference on outcomes. In Proceedings of the … AAAI Conference on Artificial Intelligence, Vol. 2018. NIH Public Access, 1931.
  • Parbhoo et al. (2018) Sonali Parbhoo, Mario Wieser, and Volker Roth. 2018. Causal Deep Information Bottleneck. arXiv preprint arXiv:1807.02326 (2018).
  • Pearl (2009) Judea Pearl. 2009. Causality. Cambridge university press.
  • Rezende et al. (2014) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32 (ICML'14). JMLR.org, II–1278–II–1286. http://dl.acm.org/citation.cfm?id=3044805.3045035
  • Robins and Greenland (1992) James M Robins and Sander Greenland. 1992. Identifiability and exchangeability for direct and indirect effects. Epidemiology (1992), 143–155.
  • Rubin (1976) Donald B Rubin. 1976. Inference and missing data. Biometrika 63, 3 (1976), 581–592.
  • Rubin (2005) Donald B Rubin. 2005. Causal inference using potential outcomes: Design, modeling, decisions. J. Amer. Statist. Assoc. 100, 469 (2005), 322–331.
  • Sen and Wasow (2016) Maya Sen and Omar Wasow. 2016. Race as a bundle of sticks: Designs that estimate effects of seemingly immutable characteristics. Annual Review of Political Science 19 (2016), 499–522.
  • Shalit et al. (2017) Uri Shalit, Fredrik D. Johansson, and David Sontag. 2017. Estimating individual treatment effect: generalization bounds and algorithms. In Proceedings of the 34th International Conference on Machine Learning (Proceedings of Machine Learning Research), Doina Precup and Yee Whye Teh (Eds.), Vol. 70. PMLR, International Convention Centre, Sydney, Australia, 3076–3085. http://proceedings.mlr.press/v70/shalit17a.html
  • Sterne et al. (2009) Jonathan AC Sterne, Ian R White, John B Carlin, Michael Spratt, Patrick Royston, Michael G Kenward, Angela M Wood, and James R Carpenter. 2009. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. Bmj 338 (2009), b2393.
  • Tran et al. (2016) Dustin Tran, Francisco JR Ruiz, Susan Athey, and David M Blei. 2016. Model criticism for bayesian causal inference. arXiv preprint arXiv:1610.09037 (2016).
  • Van Buuren (2018) Stef Van Buuren. 2018. Flexible imputation of missing data. Chapman and Hall/CRC.
  • VanderWeele and Robinson (2014) Tyler J VanderWeele and Whitney R Robinson. 2014. On causal interpretation of race in regressions adjusting for confounding and mediating variables. Epidemiology (Cambridge, Mass.) 25, 4 (2014), 473.
  • Wainwright et al. (2008) Martin J Wainwright, Michael I Jordan, et al. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends® in Machine Learning 1, 1–2 (2008), 1–305.
  • Zafar et al. (2017) Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P. Gummadi. 2017. Fairness Constraints: Mechanisms for Fair Classification. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. 962–970.
  • Zhang and Bareinboim (2018) Junzhe Zhang and Elias Bareinboim. 2018. Fairness in Decision-Making–The Causal Explanation Formula. In 32nd AAAI Conference on Artificial Intelligence.

Appendix A Data Generation

  Step 1: Remove from the dataset all children with non-White mothers who received the original treatment (as in Hill (2011)).
  Step 2: Optional: Remove extra features from x.
  Step 3: Normalize data (for each feature of x, subtract the mean and divide by the standard deviation).
  Step 4: Remove some features from the data to act as unobserved confounders z.
  Step 5: Remove some feature from the data to act as the sensitive attribute a.
  Step 6: Sample factual and counterfactual outcomes {y_{t,a}}.
  Step 7: Sample factual and counterfactual treatments {t_a}.
  Return the resulting dataset
Algorithm 1 GenerateIHDP: Semi-synthetic Data Generation Algorithm for Fair Causal Inference
  Input: Features x, unobserved confounders z
  Let x̃ denote the horizontal concatenation of x and z, and let the offset matrix W be the shape of x̃ with 0.5 in every position.
  Sample factual and counterfactual outcomes y_{t,a}, one for each setting (t, a) ∈ {0, 1}².
  Return {y_{0,0}, y_{1,0}, y_{0,1}, y_{1,1}}
Algorithm 2 GenerateOutcomes: Generate outcomes for each value of the treatment and sensitive attribute (in the style of Hill (2011), Response B)
  Input: Unobserved confounders
  Choose .
  Let
  Sample .
  Sample .
  Return
Algorithm 3 GenerateTreatments: Generate treatments for each value of the sensitive attribute
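The treatment-generation step returns one treatment draw per possible value of the sensitive attribute, so both the factual and counterfactual treatments are available for every example. A minimal sketch, with an assumed logistic form and made-up weights and offsets (the paper's actual functional form and constants are not reproduced here):

```python
# Hypothetical sketch of GenerateTreatments: sample a treatment for each
# value a of the sensitive attribute, given unobserved confounders z.
# The logistic link and all coefficients are illustrative assumptions.
import math
import random

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def generate_treatments(z, weights, offsets, rng):
    """Return {a: t_a} for a in {0, 1}."""
    treatments = {}
    for a in (0, 1):
        p = sigmoid(sum(w * v for w, v in zip(weights, z)) + offsets[a])
        treatments[a] = 1 if rng.random() < p else 0
    return treatments

rng = random.Random(0)
t = generate_treatments([0.5, -1.0], weights=[1.0, 2.0], offsets=[0.0, 1.5], rng=rng)
```

The per-`a` offset is what lets the sensitive attribute shift the treatment distribution, mirroring the confounding structure in the causal graph.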

We detail our dataset generation process in Algorithm 1. We denote the outcome under an intervention on the treatment and/or the sensitive attribute with the intervened values as subscripts. The subroutines in Algorithms 2 and 3 generate all factual and counterfactual outcomes and treatments for each example, one for each possible setting of the treatment and/or the sensitive attribute. Algorithm 1 leaves several constants unspecified; we use the following values for those constants:

  • Coefficient values: one setting for continuous variables and one for binary variables, each drawn by a function that selects values from a fixed set according to a given array of probabilities.

We also use a helper function, which is defined as:

(14)

Appendix B Identifiability of Causal Effects

Here we show that if we can successfully recover the joint distribution over the observed variables and the latent confounder, we can recover all three treatment effects we are interested in:

  1. The effect of the treatment T on the outcome Y: p(Y | do(T = 1), A) − p(Y | do(T = 0), A)

  2. The effect of the sensitive attribute A on the treatment T: p(T | do(A = 1)) − p(T | do(A = 0))

  3. The effect of the sensitive attribute A on the outcome Y: p(Y | do(A = 1)) − p(Y | do(A = 0))

Our proof will closely follow Louizos et al. (2017). For each effect, it will suffice to show that we can recover the first term on the right-hand side of each expression; the argument for the second term is the same. We will show only the proof for the effect of the treatment on the outcome, as the others are very similar.

Theorem. Given the causal model in Fig. 0(d), if we recover the joint distribution p(Z, A, T, Y), then we can recover p(Y | do(T = 1), A).

Proof. We have that

p(Y | do(T = 1), A) = ∫ p(Y | do(T = 1), A, Z) p(Z | do(T = 1), A) dZ   (15)

By the do-calculus, we can reduce further:

p(Y | do(T = 1), A) = ∫ p(Y | T = 1, A, Z) p(Z | A) dZ   (16)

If we know the joint distribution p(Z, A, T, Y), we can identify the value of each term in this expression; hence we can identify the value of the whole expression. ∎
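The reduction in Eq. (16) is the standard backdoor adjustment: the interventional distribution is an average of the conditional outcome distribution over the marginal (not treatment-conditional) distribution of the confounder. A toy numeric check on a fully discrete model with a single binary confounder, with all probabilities invented for illustration:

```python
# Toy check of the adjustment behind Eq. (16): a binary confounder z,
# binary treatment t, binary outcome y. All probabilities are made up.
p_z1 = 0.4                                   # p(z = 1)
p_t1_given_z = {0: 0.2, 1: 0.8}              # p(t = 1 | z)
p_y1_given_tz = {(1, 0): 0.6, (1, 1): 0.9,   # p(y = 1 | t, z)
                 (0, 0): 0.1, (0, 1): 0.4}

def p_z(z):
    return p_z1 if z == 1 else 1.0 - p_z1

# Adjustment formula: p(y=1 | do(t=1)) = sum_z p(y=1 | t=1, z) p(z)
p_do = sum(p_y1_given_tz[(1, z)] * p_z(z) for z in (0, 1))

# Naive conditional: p(y=1 | t=1) = sum_z p(y=1 | t=1, z) p(z | t=1)
p_t1 = sum(p_t1_given_z[z] * p_z(z) for z in (0, 1))
p_cond = sum(p_y1_given_tz[(1, z)] * p_t1_given_z[z] * p_z(z) / p_t1
             for z in (0, 1))

# The two disagree because z confounds t and y:
# p_do = 0.72, while p_cond = 0.36 / 0.44 ≈ 0.818.
```

The gap between `p_do` and `p_cond` is exactly the bias a classifier trained on the observational conditionals would inherit, which is why recovering the joint over the confounder matters.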

Appendix C Experimental details

We run each model on 500 distinct data-seed/model-seed pairs in order to get robust confidence estimates on the error of each model. We parametrize each function in our causal model with a neural network. The networks between the latent confounder and the other variables have a single hidden layer of 20 hidden units, and the learned hidden confounder has 10 units. Each of our TARNets consists of a network outputting a shared representation and two networks making predictions from that representation; each of these networks has one hidden layer with 100 hidden units, and the shared representation has 20 units. For simplicity, we fix the decoder variance for the data to 1 for all experiments (though not for the outcomes); this amounts to assuming unit variance for the data, a sensible assumption because the features are normalized during pre-processing. We used ELU non-linear activations (Clevert et al., 2015). We trained our model with Adam (Kingma and Ba, 2015) with a learning rate of 0.001, calculating the ELBO on a validation set and stopping training after 10 consecutive epochs without improvement. We sample 10 times from the posterior over the hidden confounder at both training and test time for each input example: at training time we compute the average ELBO across the ten samples, while at test time we use the average prediction.
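The validation-based stopping rule described above (halt after 10 consecutive epochs without ELBO improvement) can be sketched as a standalone function; training internals are omitted and only the patience logic is shown:

```python
# Minimal sketch of patience-based early stopping on a validation metric.
# `val_elbos` is a per-epoch sequence of validation ELBOs (higher is better).
def train_with_patience(val_elbos, patience=10):
    """Return the epoch index at which training would stop."""
    best, since_best = float("-inf"), 0
    for epoch, elbo in enumerate(val_elbos):
        if elbo > best:
            best, since_best = elbo, 0       # improvement: reset the counter
        else:
            since_best += 1
            if since_best >= patience:       # `patience` epochs with no gain
                return epoch
    return len(val_elbos) - 1                # ran out of epochs first
```

In practice one would also checkpoint the parameters at the best-ELBO epoch and restore them when stopping triggers.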