In this paper we try to bring together three different strands of research: causality, fairness, and opinion pooling. Causality deals with the definition and study of causal relationships; Pearl (2009) provides a solid theoretical foundation for modelling causal structures. Fairness is concerned with guaranteeing that prediction systems deployed in sensitive scenarios support decisions that are fair from a social point of view; this topic is particularly relevant to current research in machine learning, where models are often non-transparent and it is hard to evaluate whether their outputs are affected by discriminatory biases. Opinion pooling tackles the challenge of aggregating the opinions of several experts; when these opinions are expressed as probability distribution functions (pdfs), pooling becomes the problem of merging multiple pdfs into a single distribution that can be used by a decision maker.
The pairwise intersection of these fields has been the object of recent research. Kusner et al. (2017) and Kilbertus et al. (2017) analyzed how concepts from the domain of causality (counterfactuals, unresolved discrimination, proxy discrimination) may be used to assess fairness in the context of causal graphs. Bradley et al. (2014) and Alrajeh et al. (2018) offered a first attempt at extending the problem of opinion pooling to causal models and discussed how to aggregate potentially incompatible causal graphs.
However, little research has been done so far on the problem of aggregating causal models given a requirement of fairness. Russell et al. (2017) propose that the problem of learning a classifier under multiple fairness constraints derived from causal models provided by different experts can be recast as an optimization problem through an ε-relaxation of the definition of fairness.
In this paper we offer a first exploration of the issue of combining expressive and realistic models under the requirement of fairness using the conceptual framework of opinion pooling. In particular, building upon the work on aggregating causal judgments in Dietrich et al. (2016), we analyze the case in which potentially unfair models must be pooled and we show how results from the work on counterfactual fairness and causal judgment aggregation may be used to generate aggregated fair models. We design two complementary algorithms for producing aggregated fair models and we explore their properties.
This section reviews basic notions used to define the problem of aggregating probabilistic causal models under fairness: Section 2.1 recalls the primary definitions in the study of causality; Section 2.2 discusses the notion of fairness in machine learning; Section 2.3 offers a formalization of the problem of opinion pooling.
We define a causal model as a triple M = (U, V, F) where:
U is a set of exogenous variables representing background factors that are not affected by other variables in the model;
V is a set of endogenous variables representing factors that are affected by other variables in the model;
F is a set of structural equations {f_1, ..., f_n}, one per endogenous variable, such that the value of the variable V_i is V_i = f_i(pa_i, u_i), where pa_i are the values assumed by the variables in the set PA_i ⊆ V \ {V_i} and u_i are the values assumed by the variables in the set U_i ⊆ U; for each endogenous variable V_i, the structural equation f_i defines its value as a function of a subset of the variables in U ∪ V.
The causal diagram associated with the causal model M is the directed graph G = (N, E) where:
N is the set of vertices representing the variables in M;
E is the set of edges determined by the structural equations in F; edges connect each endogenous node V_i with its parent nodes in PA_i ∪ U_i; we denote by (X → Y) the edge going from X to Y.
Following the intuition that causal influence is acyclic, we will assume that a causal model entails an acyclic causal diagram, represented as a directed acyclic graph (DAG).
Given a causal model M, we define a context U = u as a specific instantiation of the exogenous variables.
Given an endogenous variable V_i, we will use the shorthand notation V_i(u) to denote the value of the variable V_i obtained by propagating the context u through the causal diagram according to the structural equations.
Given a causal model M, we define an intervention X ← x as the substitution of the structural equation f_X with the constant value x.
Given two endogenous variables X and Y, we will use the shorthand notation Y_{X←x} to denote the value of the variable Y under the intervention X ← x.
Notice that, from the point of view of the causal diagram, performing the intervention X ← x is equivalent to setting the value of the variable X to x and removing all the edges incoming into X.
Given a causal model M, a context u, two endogenous variables X and Y, and the intervention X ← x, we define the counterfactual as the value of the expression Y_{X←x}(u).
Conceptually, the counterfactual Y_{X←x}(u) represents the value that Y would have taken in context u had the value of X been x.
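To make these notions concrete, the following is a minimal Python sketch of a deterministic causal model, a context, an intervention, and a counterfactual. The toy equations and variable names are our own illustration, not taken from the paper:

```python
# A minimal sketch of a deterministic causal model. An intervention replaces
# a structural equation with a constant (graph surgery on incoming edges).

def evaluate(equations, context, interventions=None):
    """Propagate a context (values of the exogenous variables) through the
    structural equations, optionally applying interventions."""
    interventions = interventions or {}
    values = dict(context)
    remaining = dict(equations)
    # The equation of an intervened variable is replaced by its forced value.
    for var, forced in interventions.items():
        remaining[var] = lambda v, c=forced: c
    # Repeated passes terminate because the causal diagram is acyclic.
    while remaining:
        for var, f in list(remaining.items()):
            try:
                values[var] = f(values)
                del remaining[var]
            except KeyError:
                continue  # some parent has not been computed yet
    return values

# Toy model: exogenous u1, u2; endogenous X = u1 and Y = X + u2.
eqs = {
    "X": lambda v: v["u1"],
    "Y": lambda v: v["X"] + v["u2"],
}
ctx = {"u1": 1, "u2": 2}
factual = evaluate(eqs, ctx)["Y"]                   # Y(u) = 3
counterfactual = evaluate(eqs, ctx, {"X": 5})["Y"]  # Y_{X<-5}(u) = 7
```

The counterfactual keeps the same context u and only rewires the intervened variable, matching the graph-surgery reading of interventions above.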
Probabilistic Causal Model
A probabilistic causal model is a tuple (M, P(U)) where:
M is a causal model;
P(U) is a probability distribution over the exogenous variables. The distribution P(U), combined with the dependence of each endogenous variable V_i on the exogenous variables, as specified by the structural equation f_i, allows us to define a probability distribution P(V) over the endogenous variables as: P(V_i = v) = Σ_{u : V_i(u) = v} P(U = u).
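A small sketch of how P(U) induces a distribution over an endogenous variable, assuming a toy model with two independent binary exogenous variables (our own illustration, not the paper's model):

```python
# Push a distribution over exogenous variables through a structural
# equation to obtain a distribution over an endogenous variable.
from itertools import product

# P(U) over two binary exogenous variables, assumed independent here.
p_u1 = {0: 0.5, 1: 0.5}
p_u2 = {0: 0.9, 1: 0.1}

def x_of(u1, u2):
    """Structural equation: X = u1 OR u2."""
    return int(u1 or u2)

# P(X = x) sums P(u) over all contexts u such that X(u) = x.
p_x = {0: 0.0, 1: 0.0}
for u1, u2 in product(p_u1, p_u2):
    p_x[x_of(u1, u2)] += p_u1[u1] * p_u2[u2]
# p_x == {0: 0.45, 1: 0.55}
```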
Notice that we overload the notation M to denote both (generic) causal models and probabilistic causal models; the context will allow the reader to distinguish between them.
Following the work of Kusner et al. (2017), we review the topic of fairness, with a particular emphasis on counterfactual fairness for predictive models.
Fairness and Learned Models
The study of fairness is concerned with the societal impact of the adoption of machine learning models at large. When machine learning systems are trained on historical real-world data, the use of black-box models in sensitive contexts, such as judicial sentencing or the allocation of educational grants, carries the risk of perpetuating, or even introducing (Kusner et al., 2017), socially or morally unacceptable discriminatory biases (for a survey, see, for instance, Zliobaite, 2015). Thus, fairness requires the definition of new metrics that take into account not only the performance of a machine learning system, but also its impact on sensitive real-world situations.
Fairness of a Predictor
A predictive model can be represented as a (potentially probabilistic) function of the form Ŷ = f(X), where Ŷ is a predictor and
X is a vector of covariates. According to an observational approach to fairness, the set of covariates is partitioned into a set of protected attributes A, representing discriminatory elements of information, and a set of features X, carrying no overt sensitive information. The predictive model is then redefined as Ŷ = f(X, A) and the fairness problem is expressed as the problem of learning a predictor Ŷ that does not discriminate with respect to the protected attributes A. Now, given the complexities of social reality and disagreement over what constitutes a fair policy, different measures and principles of fairness may be adopted to rule out discrimination. For instance, fairness through unawareness is defined by requiring the predictor not to use the protected attributes A in its decision making; for a more thorough review of different types of fairness and their limitations, see Gajane (2017) and Kusner et al. (2017).
Fairness Over Causal Models
Given a probabilistic causal model (M, P(U)), fairness may be evaluated following an observational approach. Let us take Ŷ to be an endogenous variable whose structural equation is a predictive function; let us also partition the remaining endogenous variables into a set of protected attributes A and a set of features X. Then, a fair predictor is a function of the endogenous nodes, Ŷ = f(X, A), that respects a given definition of fairness.
Given a probabilistic causal model (M, P(U)), a predictor Ŷ, and a partition of the endogenous variables into {A, X}, the predictor Ŷ is counterfactually fair if, for every context X = x and A = a, P(Ŷ_{A←a}(U) = y | X = x, A = a) = P(Ŷ_{A←a'}(U) = y | X = x, A = a), for all values y of the predictor, for all values a in the domain of A, and for all values a' in the domain of A (Kusner et al., 2017).
In other words, the predictor Ŷ is counterfactually fair if, under every context, the prediction given the protected attributes A = a and the features X = x would not change had we intervened to force the value of the protected attributes to a', for all the possible values a' that the protected attributes can assume.
Denoting by desc(A) the descendants of the nodes in A in the model M, an immediate property follows from the definition of counterfactual fairness:
(Lemma 1 in Kusner et al., 2017) Given a probabilistic causal model (M, P(U)), a predictor Ŷ, and a partition of the endogenous variables {A, X}, the predictor Ŷ is counterfactually fair if it is a function only of variables that are not in desc(A).
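Lemma 1 suggests a simple graph check. The following is a sketch, assuming a causal diagram given as a set of directed edges; the variable names are illustrative:

```python
# Check the sufficient condition of Lemma 1: the predictor uses only
# variables that are neither protected attributes nor their descendants.

def descendants(edges, sources):
    """All nodes reachable from `sources` in a DAG of (u, v) edge pairs."""
    reached, frontier = set(), set(sources)
    while frontier:
        nxt = {v for (u, v) in edges if u in frontier and v not in reached}
        reached |= nxt
        frontier = nxt
    return reached

def lemma1_holds(edges, protected, predictor_inputs):
    """True if no predictor input lies in A or in desc(A)."""
    forbidden = set(protected) | descendants(edges, protected)
    return not (set(predictor_inputs) & forbidden)

edges = {("Gnd", "Job"), ("Age", "Job"), ("Job", "Y"), ("Cvr", "Y")}
assert not lemma1_holds(edges, {"Gnd"}, {"Job", "Cvr"})  # Job descends from Gnd
assert lemma1_holds(edges, {"Gnd"}, {"Age", "Cvr"})
```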
2.3 Opinion Pooling
Following the study of Dietrich et al. (2016), we introduce the framework for opinion pooling.
Assume there are n experts, each one expressing his/her opinion O_i, with 1 ≤ i ≤ n. The problem of pooling (or aggregating) the opinions consists in finding a single pooled opinion O representing the merging of all the individual opinions.
Probabilistic Opinion Pooling
Given n opinions in the form of probability distributions P_1, ..., P_n, probabilistic opinion pooling means finding a single pooled probabilistic opinion of the form P = F(P_1, ..., P_n), where F is a pooling function (Dietrich et al., 2016).
Now, given a set of probabilistic opinions P_1, ..., P_n, different functions F may be chosen to perform opinion pooling, such as arithmetic averaging or geometric averaging. A grounded way of choosing a function F is the axiomatic approach, that is, the definition of a set of properties that the pooling function is required to satisfy (Dietrich et al., 2016). For instance, it can be shown that the only pooling function satisfying the properties of event-wise independence (i.e., the value of P(x) depends only on the probability values assigned to x by the experts) and unanimity preservation (i.e., if all the experts hold the same opinion P_1 = ... = P_n, then P = P_1) is the weighted linear pooling function P(x) = Σ_i w_i P_i(x), with w_i ≥ 0 and Σ_i w_i = 1, where w_i is the weight assigned to the i-th expert (Aczél and Wagner, 1980); for a more in-depth review of different types of probabilistic opinion pooling functions and their properties, see Dietrich et al. (2016).
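A minimal sketch of the weighted linear pooling function on discrete distributions; the events and weights below are illustrative assumptions:

```python
# Weighted linear (arithmetic) pooling of discrete pdfs, represented as
# dicts mapping events to probabilities.

def linear_pool(pdfs, weights):
    """Weighted arithmetic average of the experts' distributions;
    weights must be non-negative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    events = set().union(*pdfs)
    return {e: sum(w * p.get(e, 0.0) for w, p in zip(weights, pdfs))
            for e in events}

p1 = {"rain": 0.8, "sun": 0.2}
p2 = {"rain": 0.4, "sun": 0.6}
pooled = linear_pool([p1, p2], [0.5, 0.5])
# pooled["rain"] == 0.6; if p1 == p2 the pooled opinion equals them
# (unanimity preservation), and each pooled value depends only on the
# experts' probabilities for that event (event-wise independence).
```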
Given n opinions in the form of judgments, that is, true-or-false assignments J_1, ..., J_n, judgment pooling is concerned with defining a single pooled judgment of the form J = F(J_1, ..., J_n), where F is an aggregation function (Grossi and Pigozzi, 2014).
As in the case of probabilistic opinion pooling, given a set of judgments J_1, ..., J_n, different functions F may be chosen to perform judgment pooling, such as majority voting or intersection. Again, a grounded way of choosing a function F is the axiomatic approach (Bradley et al., 2014).
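The two rules mentioned above can be sketched as one-liners over boolean judgments (illustrative code, not a library API):

```python
# Two simple judgment aggregation rules over boolean judgments.

def majority(judgments):
    """True iff a strict majority of experts judge the proposition true."""
    return sum(judgments) * 2 > len(judgments)

def intersection(judgments):
    """True iff every expert judges the proposition true."""
    return all(judgments)

votes = [True, True, False]
# majority(votes) is True, intersection(votes) is False
```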
Aggregation of Causal Models
Bradley et al. (2014) offer a seminal study of the problem of aggregating probabilistic causal models expressed as Bayesian networks, that is, structured representations of the joint probability distribution of a set of variables and of their conditional dependencies, in the form of a DAG with associated conditional probability distributions (Pearl, 2014).
First, they show that naive probabilistic opinion pooling over the conditional probabilities in the models is unable to preserve the basic property of conditional independence encoded in a Bayesian network: even if the independence X ⊥ Y | Z holds for all the experts, it cannot be guaranteed that X ⊥ Y | Z holds in the pooled model.
They then suggest a two-stage approach to the problem of aggregating the n probabilistic causal models M_1, ..., M_n into a pooled probabilistic causal model M. In the first, qualitative, step they reduce the problem of finding the graph G associated with the pooled model M to a problem of judgment aggregation over the edges of the graphs G_i of the models M_i: for every possible edge (X → Y), they take the presence of the edge from node X to node Y in model M_i as the i-th expert casting the judgment (X → Y) = true, and its absence as the judgment (X → Y) = false; the problem of defining the diagram G is then tackled as a judgment aggregation problem over each edge (X → Y). In the second, quantitative, step they reconstruct the joint probability of the pooled model M by solving a problem of probabilistic opinion pooling over the conditional distributions defining the joint distribution of M.
A central result in the analysis of Bradley et al. (2014) is the following impossibility result:
(Theorem 1 in Bradley et al., 2014) Given a set of probabilistic causal models M_1, ..., M_n, if the set of vertices contains three or more variables, then there is no judgment aggregation rule F that satisfies all the following properties:
Universal Domain: the domain of F includes all logically possible acyclic causal relations;
Acyclicity: the pooled causal model produced by F is guaranteed to be acyclic;
Unbiasedness: given two vertices X and Y, the causal dependence of Y on X in the pooled model rests only on whether Y is causally dependent on X in the individual models M_i, and the aggregation rule is not biased towards a positive or a negative outcome;
Non-dictatorship: the pooled causal model produced by F is not trivially the causal model held by a given expert i.
As a result of this theorem, no unique aggregation rule can be chosen for the pooling of causal judgments in the first step of the two-stage approach. A relaxation of these properties must be decided depending on the scenario at hand.
3 Aggregation of Causal Models Under Fairness
This section analyzes how probabilistic causal models can be aggregated under fairness: Section 3.1 provides a formalization of our problem; Section 3.2 presents two complementary algorithms for performing aggregation of probabilistic causal models under counterfactual fairness; finally, Section 3.3 offers an illustration of the use of our algorithms on a toy case study.
3.1 Problem Formalization
We now consider the case in which n experts are required to provide a probabilistic causal model representing a potentially sensitive scenario. We will make the simplifying assumption that the experts provide their models over the same variables, so that the probabilistic causal model of the i-th expert takes the form M_i = (V, P, F_i), where V are the nodes of the graph G_i, P the probability distributions at the root nodes of the graph, and F_i the set of structural equations defining the behaviour of the remaining nodes. To simplify the task of the experts, we will also take the pdfs P and the functional form of the predictor Ŷ to be pre-defined; in particular,
Ŷ is assumed to be a given algorithm (such as a neural network), while the inputs of this algorithm must be specified by the experts.
To sum up, the experts are required to specify a probabilistic causal model and to define which variables in the model are to be used as inputs to the predictor Ŷ.
We also assume that the models provided by the experts are not necessarily fair, at least not in terms of counterfactual fairness as defined in Section 2.2. We consider this assumption very reasonable, as individual experts may not be aware of the potential for discrimination in their models or may not know how to formally guarantee fairness. The final decision maker, though, is aware of fairness implications and wants to generate an aggregated predictive probabilistic causal model that guarantees counterfactual fairness. As such, the decision maker is responsible for specifying the partition of the endogenous variables into {A, X}; in other words, he/she is in charge of defining which variables are sensitive.
In conclusion, our problem may be expressed as follows: given n potentially counterfactually unfair probabilistic causal models M_1, ..., M_n, a predictor Ŷ, and a partition of the variables into {A, X}, is there a pooling function F over causal models such that the pooled model M = F(M_1, ..., M_n) is guaranteed to respect counterfactual fairness?
To tackle our problem we adopt the two-stage approach for the aggregation of probabilistic causal models discussed in Section 2.3. Our solution focuses on the first step of this method: we modify the qualitative step in order to generate an aggregated graph that guarantees counterfactual fairness, while we do not discuss the second quantitative step in which the structural equations are pooled.
There are two challenges in our approach: (i) we need the predictor Ŷ in the aggregated probabilistic causal model to be indeed counterfactually fair; and (ii) we need to specify a relaxation of one of the intuitive requirements detailed in Theorem 1. The first challenge can be addressed by satisfying the condition in Lemma 1; to meet the second challenge, we argue that the existence of a predictor Ŷ in the graph suggests the possibility of dropping the property of unbiasedness and provides a natural starting point for imposing an ordering on the edges of the graph.
Concretely, we tackle the two challenges above in the following two algorithmic steps:
Removal step: remove all the protected attributes A and their descendants desc(A) from the aggregated model M;
Pooling step: perform judgment aggregation after ordering the edges in each graph G_i according to their distance from the predictor Ŷ.
The removal step enforces Lemma 1 and thus guarantees counterfactual fairness. The pooling step performs judgment aggregation relying on an ordering of the edges in each G_i as a function of their distance from the predictor Ŷ. This ordering is not total, and we may still need a rule to break ties (e.g., random selection or an alphabetical criterion). Progressing according to this order, we select edges and perform judgment aggregation using a concrete rule (e.g., majority or intersection). New edges are then added to the graph of the pooled model M, as long as acyclicity is not violated (Bradley et al., 2014). Depending on the order of these two steps, two different algorithms arise.
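To make the two steps concrete, the following is a minimal Python sketch under stated assumptions: graphs are sets of directed edges (u, v), the judgment rule is strict majority, and the distance-based edge ordering and acyclicity bookkeeping are omitted for brevity. Names are illustrative, not the paper's notation:

```python
# Hedged sketch of the two algorithmic steps.

def descendants(edges, sources):
    """Nodes reachable from `sources` in a DAG given as (u, v) edge pairs."""
    reached, frontier = set(), set(sources)
    while frontier:
        nxt = {v for (u, v) in edges if u in frontier and v not in reached}
        reached |= nxt
        frontier = nxt
    return reached

def removal_step(edges, protected, predictor):
    """Drop protected attributes and their descendants (the predictor node
    itself is kept) together with all their incident edges."""
    bad = (set(protected) | descendants(edges, protected)) - {predictor}
    return {(u, v) for (u, v) in edges if u not in bad and v not in bad}

def pooling_step(expert_graphs):
    """Keep an edge iff a strict majority of experts asserts it."""
    candidates = set().union(*expert_graphs)
    return {e for e in candidates
            if sum(e in g for g in expert_graphs) * 2 > len(expert_graphs)}
```

Composing `removal_step` over each expert graph before `pooling_step` yields the removal-pooling algorithm; reversing the order yields pooling-removal.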
Algorithm 1 reports the removal-pooling algorithm, in which removal is performed first (lines 3-10) and then pooling (lines 12-28). Notice that this algorithm may have a high likelihood of producing an empty set of edges for the aggregated probabilistic causal model M. This is due to the fact that in the removal stage we remove all the nodes that are descendants of the protected attributes in any of the probabilistic causal models M_i. This reflects a very prudent approach, in which even a single expert relating a protected attribute A to a node X is sufficient to remove all the nodes along the paths starting in A and going through X. This minimizes the risk of introducing into the final pooled probabilistic causal model variables that are potentially discriminatory and that were identified only by a single expert. On the other hand, the drawback of this approach is that a few spurious connections from a protected attribute added by an unreliable expert may lead to an empty set of edges.
Algorithm 2 reports the pooling-removal algorithm, in which pooling is performed first (lines 3-19) and then removal (lines 21-23). Unlike the previous algorithm, this solution is less likely to end with an empty set of edges for the aggregated probabilistic causal model M. This is due to the fact that edges are first pooled, so that spurious connections introduced by unreliable experts are filtered out. This reflects a more compromise-based approach, in which some form of agreement (as defined by the concrete rule for judgment aggregation) is required to assert the causal influence of a protected attribute A on a node X.
Here we give a simple illustration of the problem of causal model aggregation under counterfactual fairness and we point out the effects of, and the differences between, the removal-pooling algorithm and the pooling-removal algorithm.
The head of the Department of Computer Science has decided to develop a predictive model to help with the selection of PhD candidates. To do so, he/she has chosen to build a predictive model Ŷ implemented as a neural network. In order to decide which features to use in Ŷ, Prof. Alice and Prof. Bob have been asked to define a causal model for this selection problem. Alice and Bob are both provided with the resumes of the candidates, from which they extract the following variables: age (Age), gender (Gnd), MSc university department (Dpt), MSc final mark (Mrk), years of job experience in Computer Science (Job), quality of the cover letter (Cvr), and the predictor (Ŷ). (For simplicity, we leave implicit the presence of an exogenous variable for each endogenous node.)
Figures 1 and 2 illustrate G_A and G_B, respectively, that is, the diagrams associated with the causal models defined by Alice and Bob. The two graphs are very similar. Understandably, both agree that the decision on whether to admit a candidate should depend on his/her work experience in computer science, the department where he/she got his/her MSc degree, the MSc final mark, and the quality of the cover letter; they also agree that the number of years of job experience in computer science is causally influenced by the age of the candidate and, since they have both read reports on the gender gap in the computer science industry, they also think that gender affects the opportunity of the candidate to have a job.
On the other hand, the two models exhibit some differences. Specifically, after skimming through the study on gender bias in admissions at Berkeley (Bickel et al., 1975; Pearl, 2009), Alice concludes that gender causally affects the choice of the department where the candidate did his/her studies. Also, unlike Bob, she decides to exclude age as an input to the predictor.
The complete probabilistic causal models would require Alice and Bob to specify the structural equations on the nodes as well. However, since our algorithms work on the graphs only at a qualitative level, we omit the definition of these equations.
Notice that in their modeling Alice and Bob did not concern themselves with the issue of discrimination. Also notice that, from a purely formal point of view, they could have made the predictor Ŷ depend on all the available variables; however, it seems more reasonable for a modeler to consider only those variables that are expected to affect the predictor.
Now, the head of the department decides to aggregate the two models taking into account the policies for fairness approved by the University. Gender is marked as the only protected attribute, thus giving rise to the following partition of the endogenous nodes: A = {Gnd}, X = {Age, Dpt, Mrk, Job, Cvr}. As a concrete judgment aggregation rule, the strict majority rule is adopted, thus preserving edges in the pooled graph only when both Alice and Bob agree on the existence of a given edge.
Suppose now that the head of the department decides to use the removal-pooling algorithm. In the first step of the algorithm, all the protected attributes and their descendants are removed: in Alice's model {Gnd, Dpt, Mrk, Job} are removed, while in Bob's model only {Gnd, Job} are removed. At the end, we are left with the sub-models of G_A and G_B illustrated in Figure 3. In the second step of the algorithm, the remaining edges are ordered according to their distance from the predictor Ŷ and pooled using the strict majority aggregation rule. This aggregation produces a minimal graph with the single feature Cvr. The aggregated predictor, in order to be fair, is just Ŷ = f(Cvr), meaning that the decision should be based only on the quality of the cover letter.
Let us suppose now that the head of the department decides to use the pooling-removal algorithm. In the first step of the algorithm, the edges in G_A and G_B are ordered according to their distance from the predictor Ŷ, using an alphabetical criterion for the resolution of ties.
Following this ordering, edges are aggregated using the strict majority rule, giving rise to the pooled (not counterfactually fair) model in Figure 4. In the second step of the algorithm, the protected attributes and their descendants are removed from the pooled model; that is, we remove {Gnd, Job}. The final counterfactually fair predictor is Ŷ = f(Dpt, Mrk, Cvr), meaning that decisions can be taken on the basis of the department of the candidate, his/her final mark, and his/her cover letter.
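As a sanity check, the example can be replayed in a few lines of Python. The edge sets below are an assumed reconstruction consistent with the text (Figures 1 and 2 are not reproduced here), and all helper names are ours:

```python
# Self-contained sketch replaying the example; edge ordering and
# tie-breaking are omitted since they do not affect this instance.

def descendants(edges, sources):
    reached, frontier = set(), set(sources)
    while frontier:
        nxt = {v for (u, v) in edges if u in frontier and v not in reached}
        reached |= nxt
        frontier = nxt
    return reached

def strip(edges, protected, predictor="Y"):
    """Remove protected attributes and their descendants (keeping the
    predictor node, whose incoming unfair edges are still dropped)."""
    bad = (set(protected) | descendants(edges, protected)) - {predictor}
    return {(u, v) for (u, v) in edges if u not in bad and v not in bad}

def majority_pool(graphs):
    """Strict majority; with two experts this amounts to unanimity."""
    candidates = set().union(*graphs)
    return {e for e in candidates
            if sum(e in g for g in graphs) * 2 > len(graphs)}

# Assumed edge sets for Alice's and Bob's diagrams ("Y" is the predictor).
alice = {("Gnd", "Dpt"), ("Gnd", "Job"), ("Age", "Job"), ("Dpt", "Mrk"),
         ("Dpt", "Y"), ("Mrk", "Y"), ("Job", "Y"), ("Cvr", "Y")}
bob = {("Gnd", "Job"), ("Age", "Job"), ("Age", "Y"), ("Dpt", "Y"),
       ("Mrk", "Y"), ("Job", "Y"), ("Cvr", "Y")}

# Removal-pooling: strip each expert graph first, then pool.
rp = majority_pool([strip(alice, {"Gnd"}), strip(bob, {"Gnd"})])
# Pooling-removal: pool first, then strip the aggregated graph.
pr = strip(majority_pool([alice, bob]), {"Gnd"})

def predictor_inputs(g):
    return {u for (u, v) in g if v == "Y"}
# predictor_inputs(rp) == {"Cvr"}
# predictor_inputs(pr) == {"Dpt", "Mrk", "Cvr"}
```

Running both orderings on the same inputs reproduces the two outcomes discussed above: the removal-pooling predictor keeps only Cvr, while the pooling-removal predictor keeps Dpt, Mrk, and Cvr.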
This example clearly illustrates the different effects of the two proposed algorithms. In the removal-pooling algorithm, every causal connection suggested by any expert spreads discrimination: the decision of a single expert (such as Alice adding a causal edge between gender and department) is sufficient to remove a whole set of nodes from the final aggregated model. In the pooling-removal algorithm, on the contrary, models are first aggregated, thus removing causal edges that are not widely supported (such as the edge between gender and department, filtered out by the majority pooling), and only then are sensitive nodes removed. Notice that, for both algorithms, the qualitative generation of the pooled model might be followed by the quantitative step of causal model aggregation, in which the structural equations of the pooled model are computed using a pooling function (Bradley et al., 2014).
This paper offers a first exploration of the problem of performing aggregation of causal models under the requirement of counterfactual fairness. In particular, we explored how the two-stage approach for causal model aggregation may be adapted in its first stage to take into account the requirement of counterfactual fairness. We presented two simple algorithms, built around the ideas of removal and pooling, to solve the problem of aggregation, and we showed how they can lead to different, yet reasonable, solutions.
Working with causal graphs and aggregating models produced by different experts while, at the same time, respecting a principle of counterfactual fairness constitutes a relevant problem in the field of machine learning and decision making. This work may be seen as a starting point for further research, and we suggest some avenues for future development that we are investigating:
Algorithms for aggregation may be refined: the current algorithms take an extremely safe stance and discard a lot of information in order to guarantee counterfactual fairness. More subtle algorithms, working on both the first and the second stage of causal model aggregation, may be developed.
The definition of protected attributes may be enriched with the introduction of resolving variables (i.e., variables that stop the propagation of discrimination from the protected attributes) and proxy variables (i.e., variables that propagate discrimination from the protected attributes) (Kilbertus et al., 2017).
Interesting information for compensating unfair biases may be extracted from the difference between the models provided by the experts and the fair aggregated model of the decision maker. Indeed, experts are likely to provide descriptive models of the dynamics of a given system, while the decision maker is interested in coming up with a prescriptive model of ideal behavior. The distance between the experts’ models and the decision maker’s may provide a measure of how close a social system is to behaving in accordance with a principle of fairness, and the specific differences may highlight variables on which to act to reduce discrimination.
Finally, a different scenario may be considered, in which the experts are actually aware of fairness issues and provide fair models. From the point of view of the decision maker, it would be interesting to prove formally whether there is an aggregation function that preserves fairness.
- Aczél and Wagner  J Aczél and C Wagner. A characterization of weighted arithmetic means. SIAM Journal on Algebraic Discrete Methods, 1(3):259–260, 1980.
- Alrajeh et al.  Dalal Alrajeh, Hana Chockler, and Joseph Y Halpern. Combining experts’ causal judgments. 2018.
- Bickel et al.  Peter J Bickel, Eugene A Hammel, and J William O'Connell. Sex bias in graduate admissions: Data from Berkeley. Science, 187(4175):398–404, 1975.
- Bradley et al.  Richard Bradley, Franz Dietrich, and Christian List. Aggregating causal judgments. Philosophy of Science, 81(4):491–515, 2014.
- Dietrich et al.  Franz Dietrich and Christian List. Probabilistic opinion pooling. In A. Hájek and C. Hitchcock, editors, Oxford Handbook of Probability and Philosophy. Oxford University Press, Oxford, 2016.
- Gajane  Pratik Gajane. On formalizing fairness in prediction with machine learning. arXiv preprint arXiv:1710.03184, 2017.
- Grossi and Pigozzi  Davide Grossi and Gabriella Pigozzi. Judgment aggregation: a primer. Synthesis Lectures on Artificial Intelligence and Machine Learning, 8(2):1–151, 2014.
- Kilbertus et al.  Niki Kilbertus, Mateo Rojas Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf. Avoiding discrimination through causal reasoning. In Advances in Neural Information Processing Systems, pages 656–666, 2017.
- Kusner et al.  Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In Advances in Neural Information Processing Systems, pages 4069–4079, 2017.
- Pearl  Judea Pearl. Causality. Cambridge University Press, 2009.
- Pearl  Judea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Elsevier, 2014.
- Russell et al.  Chris Russell, Matt J Kusner, Joshua Loftus, and Ricardo Silva. When worlds collide: integrating different counterfactual assumptions in fairness. In Advances in Neural Information Processing Systems, pages 6417–6426, 2017.
- Zliobaite  Indre Zliobaite. A survey on measuring indirect discrimination in machine learning. arXiv preprint arXiv:1511.00148, 2015.