Abstract
Statistical algorithms increasingly support decisions in many aspects of our lives. But how do we know whether these algorithms are biased and discriminate unfairly against a particular group of people, typically a minority? Fairness is generally studied in a probabilistic framework where it is assumed that there exists a protected variable whose use as an input of the algorithm may imply discrimination. Several definitions of fairness coexist in the literature. In this paper we focus on two of them, called Disparate Impact (DI) and Balanced Error Rate (BER), both based on the outcome of the algorithm across the different groups determined by the protected variable, and we study the relationship between these two notions. The goals of this paper are to detect when a binary classification rule lacks fairness and to fight against the potential discrimination attributable to it. This can be done by modifying either the classifiers or the data itself. Our work falls into the second category: we modify the input data using optimal transport theory.
Keywords :
Fairness in Machine Learning, Optimal Transport, Wasserstein barycenter.
1 Introduction
Over the last decade, Machine Learning methods have become increasingly popular for building decision algorithms. Originally meant for recommendation systems on the Internet, they are now widely used in a large number of very sensitive areas such as medicine, human resources (hiring policies), banking and insurance (lending), policing, and justice (criminal sentencing); see for instance [BHJ17], [PRT12] or [FSV18] and references therein. The decisions made by what is now referred to as AI have a growing impact on human lives. The whole machinery of these techniques relies on the fact that a decision rule can be learnt from a set of labeled examples, called the learning sample, and that this decision can then be applied to the whole population, which is assumed to follow the same underlying distribution. Hence the decision is highly influenced by the choice of the learning set.
In some cases, this learning sample may present some bias or discrimination that could be learnt by the algorithm and then propagated to the entire population through automatic decisions and, even worse, given a mathematical legitimacy for this unfair treatment. When giving algorithms the power to make automatic decisions, the danger is that reality may be shaped according to their predictions, thus reinforcing the belief in the model which is learnt. Classification algorithms are one particular focus of fairness concerns since classifiers map individuals to outcomes. Hence, achieving fair treatment is one of the growing fields of interest in Machine Learning. We refer for instance to [ZVGRG17] or [FSV18] for a recent survey on this topic. For this, several definitions of fairness have been considered. In this paper we focus on the notion of disparate impact for protected variables introduced in [FFM15]. Actually, some variables, such as sex, age or ethnic origin, are potential sources of unfair treatment since they convey information that should not be exploited by the algorithm. Such variables are called protected variables in the literature. An algorithm is called fair with respect to these attributes when its outcome does not allow inference on the information they convey. Of course, the naive solution of ignoring these attributes when learning the classifier does not ensure fairness, since the protected variables may be closely correlated with other features, enabling a classifier to reconstruct them.
Two solutions have been considered in the Machine Learning literature. The first one consists in changing the classifier so that it is not correlated with the protected attribute. We refer for instance to [ZVGRG17], [BL17] or [DOB18] and references therein. Yet changing the way a model is built, or explaining how the classifier is chosen, may be seen as too intrusive by many companies, and some may not be able to change the way they build their models. Hence a second solution consists in changing the input data so that the protected attribute cannot be predicted from them. The data are blurred in order to obtain a fair treatment of the protected class. This point of view has been proposed in [FFM15], [JL17] or [HW17], for instance. In the following, we first provide a statistical analysis of the Disparate Impact definition and recast some of the ideas developed in [FFM15] to stress the links between fairness, predictability and the distance between the distributions of the variables given the protected attribute. Then we provide some theoretical justifications of the methodology proposed by previous authors for one-dimensional data, which blurs the data using the barycenter of the conditional distributions with respect to the Wasserstein distance. These methods are called either full or partial repair. We extend this reparation procedure to the case of multidimensional data and provide a feasible algorithm to achieve this fairness reparation using the notion of Wasserstein barycenter. Finally, we propose another methodology, called Random Repair, to transform the data in order to achieve a tradeoff between a minimal loss of information with respect to the classification task and a certain level of fairness for the classification procedures that use the transformed data. Applications to real data enable us to study the efficiency of the previous procedures.
The paper is organized as follows. Section 2 presents the relationships between the notion of Disparate Impact, the predictability of a protected attribute and the distance between the distributions conditional on this attribute. Section 3 is devoted to a probabilistic framework for transforming the data in order to obtain fair classifiers. The following section, Section 4, provides some insight into the use of the Wasserstein barycenter and its limitations. Applications to a real data set are shown in Section 5, while the proofs are postponed to the Appendix.
2 Fairness using Disparate Impact assessment
Consider the probability space $(\mathbb{R}^d \times \{0,1\} \times \{0,1\}, \mathcal{B}, \mathbb{P})$, with $\mathcal{B}$ the Borel $\sigma$-algebra of subsets of $\mathbb{R}^d \times \{0,1\} \times \{0,1\}$. In this paper, we tackle the problem of forecasting a binary variable $Y \in \{0,1\}$ using observed covariates $X \in \mathbb{R}^d$. We assume moreover that the population can be divided into two categories that represent a bias, modeled by a variable $S \in \{0,1\}$. This variable is called the protected attribute; it takes the value $S=0$ for the "minority" class, supposed to be the unfavored class, and $S=1$ for the "default", and usually favored, class. We also introduce a notion of positive prediction, in the sense that $Y=1$ represents a success while $Y=0$ is a failure. Hence the classification problem aims at predicting a success from the variables $(X,S)$, using a family $\mathcal{G}$ of binary classifiers $g : \mathbb{R}^d \times \{0,1\} \to \{0,1\}$. For every $g \in \mathcal{G}$, the outcome of the classification will be the prediction $\hat{Y} = g(X,S)$. We refer for instance to [BBL04] for a complete description of classification problems in statistical learning. In this framework, discrimination or unfairness of a classification procedure appears as soon as the prediction $g(X,S)$ and the protected attribute $S$ are too closely related, in the sense that statistical inference on $g(X,S)$ may enable one to learn the distribution of the protected attribute $S$. This issue has received a lot of interest over the last years, and several ways to quantify this discrimination bias have been proposed. We highlight two of them, whose interest depends on the particular problem. More precisely, we can deal with two situations, depending on whether the true distribution of the label $Y$ is available. If it is known, following the definition introduced in [BHJ17], a classifier $g$ achieves Overall Accuracy Equality, with respect to the joint distribution of $(X, S, Y)$, if
$$\mathbb{P}\left( g(X,S) = Y \mid S = 0 \right) = \mathbb{P}\left( g(X,S) = Y \mid S = 1 \right). \qquad (2.1)$$
This entails that the probability of a correct classification is the same across groups and, hence, that the classification error is independent of the group. This idea can also be found in [ZVGRG17] as the condition of avoiding Disparate Mistreatment, which occurs when the probability of error differs across the groups, i.e. when (2.1) fails.
Nevertheless, in many problems the true label $Y$ is not available (this data may be very sensitive and the owner of the data may not want to make it available), or the classification methodology cannot be changed, so the study of fairness must be based on the outcome $g(X,S)$. In this situation, following [FFM15] or [BHJ17], a classifier $g$ is said to achieve Statistical Parity, with respect to the joint distribution of $(X, S)$, if
$$\mathbb{P}\left( g(X,S) = 1 \mid S = 0 \right) = \mathbb{P}\left( g(X,S) = 1 \mid S = 1 \right). \qquad (2.2)$$
This means that the probability of a successful outcome is the same across the groups. For instance, if the protected variable represents gender, with the value $S=0$ assigned to "female" and $S=1$ to "male", we would say that the algorithm used by a company achieves Statistical Parity if a man and a woman have the same probability of success (for instance, being hired or promoted).
In the rest of the paper, we consider classifiers such that $0 < \mathbb{P}(g(X,S) = 1 \mid S = s) < 1$ for $s \in \{0,1\}$, which means that the classifier is neither totally fair nor totally unfair, in the sense that it does not predict the same outcome for the whole of either group determined by the protected attribute.
The independence described in (2.2) is difficult to achieve and may not hold in real data. Therefore, to assess this kind of fairness, an index called the Disparate Impact of the classifier $g$ with respect to $(X,S)$ has been introduced in [FFM15] as
$$DI(g, X, S) = \frac{\mathbb{P}\left( g(X,S) = 1 \mid S = 0 \right)}{\mathbb{P}\left( g(X,S) = 1 \mid S = 1 \right)}. \qquad (2.3)$$
The ideal scenario where $g$ achieves Statistical Parity is equivalent to $DI(g,X,S) = 1$. Statistical Parity is often unrealistic, so we will relax it into achieving a certain level of fairness, as described in the following definition.
Definition 2.1.
The classifier $g$ has Disparate Impact at level $a \in (0,1]$, with respect to $(X,S)$, if $DI(g, X, S) \leq a$.
Note that the Disparate Impact of a classifier measures its level of fairness: the smaller the value of $DI(g,X,S)$, the less fair the classifier. The classification rules considered in this framework are such that $DI(g,X,S) \leq 1$, because we are assuming that the default class is more likely to have a successful outcome. Thus, in the definition, the level of fairness takes values $a \in (0,1]$. We point out that the value $a_0 = 0.8$, also known in the literature as the 80% rule, has been cited as a legal score to decide whether the discrimination of an algorithm is acceptable or not (see for instance [FFM15]). This rule can be read as "for every 5 individuals with a successful outcome in the majority class, at least 4 in the minority class will have a successful outcome too".
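The empirical version of the Disparate Impact index (2.3), and the 80% rule check, can be sketched in a few lines. This is an illustrative snippet on hypothetical toy predictions (the helper name `disparate_impact` is ours, not from the paper):

```python
import numpy as np

def disparate_impact(y_pred, s):
    """Empirical Disparate Impact: P(g=1 | S=0) / P(g=1 | S=1)."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    p0 = y_pred[s == 0].mean()   # success rate in the minority class
    p1 = y_pred[s == 1].mean()   # success rate in the default class
    return p0 / p1

# Toy predictions: the minority class succeeds less often.
s      = np.array([0] * 10 + [1] * 10)
y_pred = np.array([1] * 6 + [0] * 4 + [1] * 8 + [0] * 2)

di = disparate_impact(y_pred, s)   # 0.6 / 0.8 ≈ 0.75
print(di, di >= 0.8)               # fails the 80% rule
```

With these toy numbers the rule predicts success for 60% of the minority class and 80% of the default class, so the index is 0.75 and the 80% rule is violated.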
In what follows, to promote fairness, it will be useful to state the definition in the reverse sense: a classifier $g$ does not have Disparate Impact at level $a$, with respect to $(X,S)$, if $DI(g,X,S) > a$. Finally, another definition has been proposed in the statistical literature on fair learning. Given a classifier $g$, its Balanced Error Rate (BER) with respect to the joint distribution of the random vector $(X,S)$ is defined as the average class-conditional error
$$BER(g, X, S) = \frac{\mathbb{P}\left( g(X,S) = 1 \mid S = 0 \right) + \mathbb{P}\left( g(X,S) = 0 \mid S = 1 \right)}{2}. \qquad (2.4)$$
Notice that $BER(g,X,S)$ is the general misclassification error of $g$ as a predictor of $S$ in the particular case where $\mathbb{P}(S=0) = \mathbb{P}(S=1) = 1/2$, which is the ideal situation where both protected classes have the same probability of occurrence. This quantity enables us to define the notion of predictability of the protected attribute: $S$ is said to be $\varepsilon$-predictable from $X$ if there exists a classifier $g$ such that
$$BER(g, X, S) \leq \frac{1}{2} - \varepsilon.$$
Equivalently, $S$ is said not to be $\varepsilon$-predictable from $X$ if $BER(g,X,S) > \frac{1}{2} - \varepsilon$ for all classifiers $g$ chosen in the class $\mathcal{G}$. Thus, if the minimum of this quantity is achieved by a classifier $g^*$, then it is clear that $S$ is not $\varepsilon$-predictable from $X$ for all $\varepsilon > \frac{1}{2} - BER(g^*, X, S)$. In the following, we recast the previous notions of fairness and provide a probabilistic framework highlighting the relationships between the distribution of the observations and the fairness of the classification problem. The following theorem generalizes a result in [FFM15], showing the relationship between predictability and Disparate Impact.
Theorem 2.1.
Given random variables $X \in \mathbb{R}^d$ and $S \in \{0,1\}$, the classifier $g$ has Disparate Impact at level $a$, with respect to $(X,S)$, if and only if
$$BER(g, X, S) \leq \frac{1}{2} - \mathbb{P}\left( g(X,S) = 1 \mid S = 1 \right) \frac{1-a}{2}.$$
The following theorem establishes the relationship between the minimum Balanced Error Rate and the distance in Total Variation between the two conditional distributions $\mu_0 = \mathcal{L}(X \mid S = 0)$ and $\mu_1 = \mathcal{L}(X \mid S = 1)$.
Theorem 2.2.
Given the variables $X \in \mathbb{R}^d$ and $S \in \{0,1\}$,
$$\min_{g} BER(g, X, S) = \frac{1 - d_{TV}(\mu_0, \mu_1)}{2},$$
where the minimum is taken over all classifiers $g : \mathbb{R}^d \to \{0,1\}$ based on $X$ alone.
This result shows that fairness expressed through the notion of Disparate Impact depends heavily on the conditional distributions of the variables $X$ given the protected attribute, $\mu_0$ and $\mu_1$.
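The equivalence in Theorem 2.1 rests on the algebraic identity $BER = \frac{1}{2} - \mathbb{P}(g=1 \mid S=1)\frac{1-DI}{2}$, which can be checked numerically. The sketch below uses simulated predictions (not data from the paper) and our own variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=2000)
# A biased rule: predicts success more often for the S=1 group.
g = (rng.random(2000) < np.where(s == 1, 0.8, 0.5)).astype(int)

p0, p1 = g[s == 0].mean(), g[s == 1].mean()
di = p0 / p1                                   # empirical Disparate Impact

# BER of g viewed as a predictor of S (eq. 2.4):
# average of the two class-conditional error rates.
ber = 0.5 * (g[s == 0].mean() + (1 - g[s == 1]).mean())

# Identity behind Theorem 2.1: BER = 1/2 - p1 * (1 - DI) / 2,
# hence DI <= a  <=>  BER <= 1/2 - p1 * (1 - a) / 2.
print(abs(ber - (0.5 - p1 * (1 - di) / 2)) < 1e-12)   # True
```

The identity follows by writing $BER = \frac{1}{2} - \frac{p_1 - p_0}{2}$ with $p_s = \mathbb{P}(g = 1 \mid S = s)$ and factoring out $p_1$.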
Actually, Theorem 2.2 implies that $S$ is not $\varepsilon$-predictable from $X$ if, and only if,
$$d_{TV}(\mu_0, \mu_1) < 2\varepsilon, \qquad (2.5)$$
and, as a consequence of Theorem 2.1, for all $g \in \mathcal{G}$,
$$DI(g, X, S) > 1 - \frac{2\varepsilon}{\mathbb{P}\left( g(X,S) = 1 \mid S = 1 \right)}.$$
Hence, the smaller the Total Variation distance, the larger the values of $\varepsilon$ satisfying Equation (2.5) and thus the less predictable $S$ will be from the variables $X$. The best case happens when $d_{TV}(\mu_0, \mu_1) = 0$, which is equivalent to the equality of both conditional distributions, $\mu_0 = \mu_1$. In this situation, $X$ and $S$ are independent random variables, $S$ is not $\varepsilon$-predictable from $X$ for any $\varepsilon > 0$, and every classifier based on $X$ alone satisfies $DI(g,X,S) = 1$. Note that clearly non-predictability is the best that can be achieved.
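Theorem 2.2 and bound (2.5) are easy to illustrate when $X$ is discrete, since the Total Variation distance then reduces to half an $\ell_1$ norm. A minimal sketch with hypothetical toy distributions (`tv_distance` is our name):

```python
import numpy as np

def tv_distance(p, q):
    """Total variation between two discrete laws:
    sup_A |P(A) - Q(A)| = 0.5 * ||p - q||_1."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

# Conditional laws of a discrete feature X given S=0 and S=1.
mu0 = np.array([0.5, 0.3, 0.2])
mu1 = np.array([0.2, 0.3, 0.5])

d = tv_distance(mu0, mu1)     # 0.5 * (0.3 + 0.0 + 0.3) = 0.3
min_ber = (1 - d) / 2         # Theorem 2.2: best achievable BER = 0.35
print(d, min_ber)
```

With $d_{TV} = 0.3$, no classifier can predict $S$ from $X$ with balanced error below $0.35$, so $S$ is not $\varepsilon$-predictable for any $\varepsilon < 0.15$.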
3 Removing disparate impact using Optimal Transport
3.1 A probabilistic model for data repair
Some classification procedures exhibit a discrimination bias, quantified through a potential Disparate Impact in the classification outcome $g(X,S)$, with respect to the joint distribution of $(X,S)$. To get rid of the possible discrimination associated with a classifier $g$, two main strategies can be used: modifying the classifiers, or modifying the input data. In this work, we face the problem where we have no access to the labels of the learning sample; hence we focus on the methodologies that modify the data in order to achieve fairness.
The main idea is to change the data in order to break their relationship with the protected attribute. This transformation is called repairing the data. For this, [FFM15], [JL17] or [HW17] propose to map the conditional distributions to a common distribution in order to achieve the statistical parity described in (2.2). In one dimension, the common distribution chosen is the one obtained by taking the mean of the quantile functions. A total repair of the data amounts to modifying the input variables $X$ by building a repaired version, denoted by $\tilde{X}$, such that any classifier $g$ will have Disparate Impact at level $a = 1$ with respect to $(\tilde{X}, S)$. This means that every classifier used to predict the target class $Y$ from the new variable $\tilde{X}$ will achieve Statistical Parity with respect to $S$. As a counterpart, it is clear that the distribution to which the original variables are mapped should convey as much information as possible on the original variable, otherwise the transformation would hamper the accuracy of the new classification. This constraint led some authors to recommend the use of the so-called Wasserstein barycenter. We now present some statistical justifications for this choice and provide some comments on the way to repair the data so as to obtain fair enough classification rules without modifying the original data set too much. Achieving Statistical Parity amounts to modifying the original data into a new random variable $\tilde{X}$ such that the conditional distribution with respect to the protected attribute is the same for both groups, namely
$$\mathcal{L}(\tilde{X} \mid S = 0) = \mathcal{L}(\tilde{X} \mid S = 1). \qquad (3.1)$$
In this case, any classifier $g$ built from such information will be such that
$$\mathbb{P}\left( g(\tilde{X}) = 1 \mid S = 0 \right) = \mathbb{P}\left( g(\tilde{X}) = 1 \mid S = 1 \right),$$
which implies that $DI(g, \tilde{X}, S) = 1$, and so this transformation promotes full fairness of the classification rule. To achieve this transformation, the solution detailed in many papers is to map both conditional distributions $\mu_0$ and $\mu_1$ onto a common distribution $\nu$. Actually, the distribution of the original variables is transformed using a map $T_S$ which depends on the value of the protected attribute $S$ and is such that
$$T_s \sharp \mu_s = \nu, \quad s \in \{0,1\}, \qquad (3.2)$$
where $T \sharp \mu$ denotes the push-forward of $\mu$ by $T$.
Note that the function $T_S$ is random because of its dependence on the binary random variable $S$.
In this framework, the problem of achieving Statistical Parity is the same as the problem of finding a (random) function $T_S$ such that (3.2) holds. As represented in Figure 1, if we denote $\mu_s = \mathcal{L}(X \mid S = s)$, our goal is to map these two distributions to a common law $\nu$.
Consequently, two different problems arise.

First, the distribution $\nu$ should be chosen as close as possible to both distributions $\mu_0$ and $\mu_1$ at the same time, in order to reduce the amount of information lost through the transformation, thus still enabling the prediction task using the modified variable $\tilde{X}$ instead of the original $X$.

Second, once the distribution $\nu$ has been selected, we have to find the optimal way of transporting $\mu_0$ and $\mu_1$ to this new distribution $\nu$.
From Section 2, the natural distance related to fairness between the two conditional distributions is the Total Variation distance, and it is the one that should be used. However, this distance is computationally difficult to handle; hence previous works promote the use of the Wasserstein distance, which appears as a natural distance along which to move distributions.
For this, we recall some results on optimal transport theory and the Wasserstein metric between probability measures, which provide an appropriate tool for comparing probability distributions. In this framework, the map $T_S$ will be a random transport plan between the distributions $\mu_0$, $\mu_1$ and $\nu$. Moreover, we first recall the definition of Wasserstein barycenters, which are often chosen in the statistical literature as the new distribution $\nu$.
3.2 Wasserstein distance and Wasserstein barycenters
Consider the space $\mathcal{W}_2(\mathbb{R}^d)$ of Borel probabilities on $\mathbb{R}^d$ with finite second moment. The related set $\mathcal{W}_{2,ac}(\mathbb{R}^d)$ will denote the subset of $\mathcal{W}_2(\mathbb{R}^d)$ containing the probabilities that are absolutely continuous with respect to the Lebesgue measure. Given $\mu, \nu \in \mathcal{W}_2(\mathbb{R}^d)$, we denote by $\Pi(\mu,\nu)$ the set of all probability measures $\pi$ over the product set $\mathbb{R}^d \times \mathbb{R}^d$ with first (resp. second) marginal $\mu$ (resp. $\nu$). The transportation cost with quadratic cost function, or quadratic transportation cost, between these two measures $\mu$ and $\nu$ is defined as
$$\mathcal{T}_2(\mu, \nu) = \inf_{\pi \in \Pi(\mu,\nu)} \int \| x - y \|^2 \, d\pi(x,y).$$
The quadratic transportation cost allows us to endow the set $\mathcal{W}_2(\mathbb{R}^d)$ with a metric by defining the Wasserstein distance between $\mu$ and $\nu$ as $W_2(\mu,\nu) = \mathcal{T}_2(\mu,\nu)^{1/2}$. More details on Wasserstein distances and their links with optimal transport problems can be found in [Rac84] or [Vil08], for instance.
In the one-dimensional case, $W_2$ is simply the $L_2$ distance between the quantile functions of $\mu$ and $\nu$, enabling its direct computation:
$$W_2^2(\mu, \nu) = \int_0^1 \left( F^{-1}(t) - G^{-1}(t) \right)^2 dt,$$
where $F^{-1}$ and $G^{-1}$ denote the quantile functions of $\mu$ and $\nu$, respectively.
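Empirically, for two samples of equal size, the quantile functions are the sorted samples, so the one-dimensional $W_2$ reduces to matching order statistics. A minimal sketch under that equal-size assumption (the helper name `w2_1d` is ours):

```python
import numpy as np

def w2_1d(x, y):
    """Squared 2-Wasserstein distance between two 1-d empirical
    distributions with the same number of points: match sorted samples,
    i.e. integrate the squared difference of the quantile functions."""
    x, y = np.sort(x), np.sort(y)
    return np.mean((x - y) ** 2)

# Sanity check: shifting a sample by a constant c gives W2^2 = c^2.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = x + 2.0
print(w2_1d(x, y))   # ≈ 4.0
```

Since the shift preserves the ordering, sorted samples are matched pairwise and every squared displacement equals $c^2 = 4$.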
A distribution $\pi$ with marginals $\mu$ and $\nu$ which minimizes the quadratic transportation cost is called an optimal coupling of $\mu$ and $\nu$. Moreover, if $\mu$ vanishes on sets of dimension $d-1$, in particular if $\mu \in \mathcal{W}_{2,ac}(\mathbb{R}^d)$, then there exists an optimal transport map $T$ transporting (pushing forward) $\mu$ to $\nu$. The following theorem is a convenient version that can be found in [Vil03, Theorem 2.12].
Theorem 3.1.
Let $\mu, \nu \in \mathcal{W}_2(\mathbb{R}^d)$ and let $\pi$ be the joint distribution of a pair $(X, Y)$ of $\mathbb{R}^d$-valued random vectors with probability laws $\mu$ and $\nu$.

(i) The probability distribution $\pi$ is an optimal coupling of $\mu$ and $\nu$ if, and only if, there exists a convex lower semicontinuous function $\varphi$ such that $Y \in \partial\varphi(X)$ a.s., that is, $\pi$ is concentrated on the subgradient of $\varphi$.

(ii) If we assume that $\mu$ does not give mass to sets of dimension at most $d-1$, then there is a unique optimal coupling of $\mu$ and $\nu$, which can be characterized as the unique solution to the Monge transportation problem through an optimal transport map $T$, i.e. $T \sharp \mu = \nu$ (or $Y = T(X)$ a.s.), and
$$\mathcal{T}_2(\mu, \nu) = \mathbb{E}\| X - T(X) \|^2.$$
Such a map is characterized, $\mu$-a.s. uniquely, as the only function mapping $\mu$ to $\nu$ that is the gradient of a convex lower semicontinuous function $\varphi$. In the following we will use the notation $T = \nabla\varphi$ for this optimal transport map.
We point out that the existence of the optimal transport map is commonly referred to as Brenier's theorem, having originated from Y. Brenier's work in the analysis and mechanics literature. However, it is worthwhile pointing out that a similar statement was established earlier and independently in a probabilistic framework by J.A. Cuesta-Albertos and C. Matrán [CM89]: they show the existence of an optimal transport map for the quadratic cost over Euclidean and Hilbert spaces, and prove monotonicity of the optimal map in some sense (Zarantonello monotonicity).
When dealing with a collection of distributions $\mu_1, \ldots, \mu_k$, we can define a notion of variation of these distributions. For any $\eta \in \mathcal{W}_2(\mathbb{R}^d)$, set
$$V(\eta) = \sum_{j=1}^{k} \pi_j W_2^2(\eta, \mu_j),$$
where $\pi_1, \ldots, \pi_k$ are positive real numbers such that $\sum_{j=1}^{k} \pi_j = 1$. This quantity provides a global measure of separation between the probabilities $\mu_j$ with respect to the fixed weights $\pi_j$ and has received attention recently. Finding the distribution minimizing this variance of the distributions has been tackled when defining the notion of barycenter of distributions with respect to the Wasserstein distance in the seminal work [AC11]. More precisely, given $\mu_1, \ldots, \mu_k \in \mathcal{W}_2(\mathbb{R}^d)$, they provide conditions to ensure existence and uniqueness of the barycenter of the probability measures $\mu_j$ with weights $\pi_j$, i.e. a minimizer of the following criterion:
$$\mu_B \in \arg\min_{\eta \in \mathcal{W}_2(\mathbb{R}^d)} \sum_{j=1}^{k} \pi_j W_2^2(\eta, \mu_j). \qquad (3.3)$$
Such a minimizer, $\mu_B$, is called a barycenter, or Fréchet mean, of $\mu_1, \ldots, \mu_k$ with respect to the weights $\pi_1, \ldots, \pi_k$. Empirical versions of the barycenter and their properties are analyzed in [BLGL15] or [LGL17]. Similar ideas have also been developed in [CD14] or [BK12]. Hence the Wasserstein barycenter distribution appears to be a meaningful feature to represent the mean variations of a set of distributions. We point out that its computation is a difficult issue in the general case. Yet, in this work, we only consider the barycenter between two probabilities $\mu_0$ and $\mu_1$. In the one-dimensional case, the solution proposed in [FFM15] to repair the data is to map these distributions to the distribution whose quantile function is the mean of the quantile functions of $\mu_0$ and $\mu_1$. This is precisely the minimizer of (3.3) for distributions on the real line, denoted by $\mu_B$. In the following, we present in Section 5 how to compute a barycenter between two distributions in higher dimensions, and propose in Section 4 a justification for using the Wasserstein barycenter to repair the data.
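The one-dimensional quantile-averaging construction just described has a direct empirical counterpart when both samples have the same size: sort each sample and average the order statistics. A sketch under that assumption, with hypothetical Gaussian groups (`barycenter_1d` is our name):

```python
import numpy as np

def barycenter_1d(x0, x1, pi0=0.5):
    """Sample from the W2 barycenter of two 1-d empirical distributions
    of equal size: the barycenter's quantile function is the weighted
    mean of the quantile functions, F_B^{-1} = pi0*F_0^{-1} + pi1*F_1^{-1}."""
    q0, q1 = np.sort(x0), np.sort(x1)
    return pi0 * q0 + (1 - pi0) * q1   # barycenter sample (sorted)

rng = np.random.default_rng(2)
x0 = rng.normal(0.0, 1.0, size=1000)   # group S = 0
x1 = rng.normal(4.0, 1.0, size=1000)   # group S = 1
xb = barycenter_1d(x0, x1)
print(xb.mean())                       # roughly halfway between 0 and 4
```

For two Gaussians with equal weights and equal variance, the barycenter is the Gaussian with the averaged mean, which the output approximates up to sampling error.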
4 Full and Partial Repair with Wasserstein Barycenter
In our particular problem, where $S \in \{0,1\}$, the two conditional distributions $\mu_0$ and $\mu_1$ of the random variable $X$ given the protected attribute are going to be transformed into the distribution of the Wasserstein barycenter between $\mu_0$ and $\mu_1$, with weights $\pi_0 = \mathbb{P}(S=0)$ and $\pi_1 = \mathbb{P}(S=1)$, defined as
$$\mu_B \in \arg\min_{\eta \in \mathcal{W}_2(\mathbb{R}^d)} \left\{ \pi_0 W_2^2(\mu_0, \eta) + \pi_1 W_2^2(\mu_1, \eta) \right\}. \qquad (4.1)$$
Let $\tilde{X}$ be the transformed variable with distribution $\mu_B$. For each $s \in \{0,1\}$, the deformation will be performed through the optimal transport map $T_s$ pushing each $\mu_s$ towards the weighted barycenter, $T_s \sharp \mu_s = \mu_B$, whose existence is guaranteed by Theorem 3.1 as soon as $\mu_0, \mu_1$ are absolutely continuous with respect to the Lebesgue measure.
Remark 4.1.
Note first that, in the particular setting of two distributions, the computation of the barycenter of the two measures is equivalent to the computation of the optimal transport map between them. More precisely, if we assume that $\mu_0 \in \mathcal{W}_{2,ac}(\mathbb{R}^d)$ and denote by $T$ the optimal transport map between $\mu_0$ and $\mu_1$, that is $T \sharp \mu_0 = \mu_1$, then we can write
$$\mu_\lambda = \left( (1-\lambda)\, \mathrm{id} + \lambda\, T \right) \sharp \mu_0,$$
where the map $(1-\lambda)\, \mathrm{id} + \lambda\, T$ is an optimal transport map, for all $\lambda \in [0,1]$. We have that the measure $\mu_\lambda$ is the weighted barycenter between $\mu_0$ and $\mu_1$, with weights $1-\lambda$ and $\lambda$, respectively; in particular $\mu_B = \mu_{\pi_1}$. So the complexity of computing $\mu_B$ is the same as the complexity of computing $T$.
Remark 4.2.
Note also that, for distributions on the real line, we can write the explicit expression of the barycenter based on the exact solution to the optimization problem (4.1). Given $\pi_0$ and $\pi_1$, let $F_s$ denote the cumulative distribution function of $X$ given $S = s$, and $F_s^{-1}$ its associated quantile function. The weighted Wasserstein barycenter of the two distributions $\mu_0$ and $\mu_1$ is the unique minimizer of the functional (3.3), and its quantile function can be computed as
$$F_B^{-1} = \pi_0 F_0^{-1} + \pi_1 F_1^{-1}.$$
Moreover, the optimal transport map solution to (4.1) is $T_s = F_B^{-1} \circ F_s$, for $s \in \{0,1\}$.
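The closed form $T_s = F_B^{-1} \circ F_s$ gives a direct empirical recipe for total repair in one dimension: send each observation to the barycenter quantile of its within-group rank. The sketch below assumes equal group sizes for simplicity (the general case needs the discrete coupling of Section 5); `total_repair_1d` is a hypothetical helper name:

```python
import numpy as np

def total_repair_1d(x, s, pi0=None):
    """Push each conditional empirical distribution onto the empirical
    W2 barycenter through T_s = F_B^{-1} o F_s. Equal group sizes are
    assumed, so the barycenter quantiles are elementwise averages."""
    x = np.asarray(x, float)
    s = np.asarray(s)
    x0, x1 = np.sort(x[s == 0]), np.sort(x[s == 1])
    pi0 = (s == 0).mean() if pi0 is None else pi0
    xb = pi0 * x0 + (1 - pi0) * x1            # barycenter quantiles
    xr = np.empty_like(x)
    # rank of each point inside its own group plays the role of F_s(x);
    # the barycenter quantile at that rank is F_B^{-1}(F_s(x)).
    xr[s == 0] = xb[np.argsort(np.argsort(x[s == 0]))]
    xr[s == 1] = xb[np.argsort(np.argsort(x[s == 1]))]
    return xr

s = np.array([0, 0, 0, 1, 1, 1])
x = np.array([0.0, 1.0, 2.0, 10.0, 11.0, 12.0])
print(total_repair_1d(x, s))   # → [5. 6. 7. 5. 6. 7.]
```

After repair both groups carry exactly the same values, so (3.1) holds for the empirical measures and no classifier can tell the groups apart from the repaired feature.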
4.1 Total repair
To understand the use of the Wasserstein barycenter distribution as the target distribution for $\mu_0$ and $\mu_1$, we quantify the amount of information lost by replacing the distribution of $X$ by the distribution of the variable obtained by transporting these two distributions. Set the random transport plan $T_S = (1-S)\, T_0 + S\, T_1$, and the modified variable $\tilde{X} = T_S(X)$. We point out that choosing the distribution of $\tilde{X}$ amounts to choosing the transportation plans $T_0$ and $T_1$.
We are facing learning problems in two different settings.

On the one hand, the full information available consists of the input variables $X$ and also the protected variable $S$, which plays an important role in the classification since the classifier may have a different behavior on the classes $S=0$ and $S=1$. Hence we let $S$ play a role in the decision process, since it is associated with $Y$, possibly giving rise to a different treatment for the two groups. In this case, the classification risk when the full data is available can be computed as the risk in the prediction of a classification rule $g$ that depends on both variables $X$ and $S$, namely
$$R(g) = \mathbb{P}\left( g(X, S) \neq Y \right).$$

On the other hand, with the repaired data, only the modified version $\tilde{X}$ of the input data is at hand. Hence learning a classifier amounts to minimizing
$$\tilde{R}(g) = \mathbb{P}\left( g(\tilde{X}) \neq Y \right).$$
Studying the efficiency of the method requires providing a bound for the difference between the minimal risks obtained for the best classifier with input data $(X,S)$, called $g^*$, and for the best classifier with input data $\tilde{X}$, called $\tilde{g}$. These risks are respectively denoted $R(g^*)$ and $\tilde{R}(\tilde{g})$, and the quantity of interest is their difference $\tilde{R}(\tilde{g}) - R(g^*)$.
Note first that, given $X$ and $S$, $R(g)$ can be computed by mimicking the usual expression of the 2-class classification error, as in [BBL04] for instance. Denoting the conditional expectation of the label as
$$\eta_s(x) = \mathbb{P}\left( Y = 1 \mid X = x, S = s \right), \qquad (4.2)$$
we can write
$$R(g) = \mathbb{E}\left[ \eta_S(X)\, \mathbf{1}_{\{g(X,S) = 0\}} + \left( 1 - \eta_S(X) \right) \mathbf{1}_{\{g(X,S) = 1\}} \right]. \qquad (4.3)$$
The minimum risk is thus obtained using the Bayes rule
$$g^*(x, s) = \mathbf{1}_{\{\eta_s(x) > 1/2\}}.$$
Similarly, the risk related to a classification rule $g$ based on the repaired variable $\tilde{X} = T_S(X)$ is given by
$$\tilde{R}(g) = \mathbb{E}\left[ \eta_S(X)\, \mathbf{1}_{\{g(\tilde{X}) = 0\}} + \left( 1 - \eta_S(X) \right) \mathbf{1}_{\{g(\tilde{X}) = 1\}} \right]. \qquad (4.4)$$
Hence, the amount of information lost due to modifying the data is controlled by the following theorem.
Theorem 4.3.
Consider $X \in \mathbb{R}^d$ and $S \in \{0,1\}$. Let $T_S$ be a random transformation of $X$ such that $T_s \sharp \mu_s = \nu$, with $T_s$ the optimal transport map from $\mu_s$ to $\nu$ for $s \in \{0,1\}$, and consider the transformed version $\tilde{X} = T_S(X)$. For each $s \in \{0,1\}$, assume that the function $\eta_s$ defined in (4.2) is Lipschitz with constant $K_s$. Then, if $K = \max\{K_0, K_1\}$,
$$\tilde{R}(\tilde{g}) - R(g^*) \leq 2K \sqrt{\pi_0 W_2^2(\mu_0, \nu) + \pi_1 W_2^2(\mu_1, \nu)}. \qquad (4.5)$$
The proof of this theorem, which relies on the following lemma, is postponed to the Appendix.
Lemma 4.4.
Under the assumptions of Theorem 4.3, the following bound holds:
$$\tilde{R}(\tilde{g}) - R(g^*) \leq 2K\, \mathbb{E}\| X - T_S(X) \|.$$
Hence, Theorem 4.3 provides some justification for the use of the Wasserstein barycenter as the distribution of the modified variable. Actually, minimizing the upper bound in (4.5) with respect to the distribution $\nu$ leads to considering the transport plan carrying the conditional distributions towards their Wasserstein barycenter with weights $(\pi_0, \pi_1)$, that is, $\nu = \mu_B$. Hence, this provides some understanding of the choice of the Wasserstein barycenter advocated in [FFM15]. This leads to the bound
$$\tilde{R}(\tilde{g}_B) - R(g^*) \leq 2K \sqrt{\pi_0 W_2^2(\mu_0, \mu_B) + \pi_1 W_2^2(\mu_1, \mu_B)}.$$
Yet this is only an upper bound, which merely provides guidelines on the choice of the distribution to which the conditional distributions have to be mapped. Nevertheless, choosing the Wasserstein barycenter provides a reasonable and, more importantly, feasible solution to achieve statistical parity.
4.2 Partial repair
As pointed out in the previous section, the Total Repair process ensures full fairness, but at the expense of the accuracy of the classification. A solution for this, called Geometric Repair, can be found in [FFM15]. The authors propose not to move both conditional distributions all the way to the barycenter, but only part of the way towards it, along the Wasserstein geodesic path between $\mu_0$ and $\mu_1$. We analyze this procedure next and propose an alternative to it.
Let $\lambda \in [0,1]$ be the parameter representing the amount of repair desired for $X$. Let $B$ be a target variable with distribution $\nu$, and let $T_s$ be the optimal transport map pushing each $\mu_s$ towards the target $\nu$. In the literature, $\nu$ is chosen as the barycenter $\mu_B$, and the Partially Repaired conditional distributions, for $s \in \{0,1\}$, are defined as
$$\mu_{s,\lambda} = \left( (1-\lambda)\, \mathrm{id} + \lambda\, T_s \right) \sharp \mu_s.$$
This procedure is represented in Figure 3. Observe that $\lambda = 1$ yields the fully repaired variable, while $\lambda = 0$ leaves the conditional distributions unchanged. So the parameter $\lambda$ governs how close the distributions are to the barycenter. Choosing the parameter $\lambda$ should be a tradeoff between, on the one hand, the accuracy of the classification, which calls for little change in the initial distributions, and, on the other hand, non-predictability of the protected variable, which requires the two conditional distributions to be close with respect to the Total Variation distance.
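In the one-dimensional empirical setting, Geometric Repair interpolates each point between its original position and its image under the optimal map to the barycenter. A sketch, again under the simplifying assumption of equal group sizes (`geometric_repair_1d` is our name):

```python
import numpy as np

def geometric_repair_1d(x, s, lam, pi0=0.5):
    """Geometric (partial) repair: move every point a fraction lam along
    the segment towards its barycenter image, x -> (1-lam)*x + lam*T_s(x).
    Equal group sizes assumed, so barycenter quantiles are elementwise means."""
    x = np.asarray(x, float)
    s = np.asarray(s)
    x0, x1 = np.sort(x[s == 0]), np.sort(x[s == 1])
    xb = pi0 * x0 + (1 - pi0) * x1              # barycenter quantiles
    xr = x.copy()
    for grp in (0, 1):
        ranks = np.argsort(np.argsort(x[s == grp]))
        xr[s == grp] = (1 - lam) * x[s == grp] + lam * xb[ranks]
    return xr

s = np.array([0, 0, 1, 1])
x = np.array([0.0, 2.0, 10.0, 12.0])
print(geometric_repair_1d(x, s, lam=0.5))   # → [2.5 4.5 7.5 9.5]
print(geometric_repair_1d(x, s, lam=1.0))   # → [5. 7. 5. 7.] (total repair)
```

Note that for any lam < 1 the two repaired groups in this example still occupy disjoint ranges, which anticipates the total-variation problem discussed next.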
Arguing along the lines of the previous section to obtain an upper bound for the classification risk using the two distributions does not lead to a satisfying result. This comes from the fact that we move distributions according to the Wasserstein distance, while fairness is measured using the Total Variation distance, and the two are of a different nature. In fact, the distance in Total Variation between two probabilities $\mu$ and $\nu$ with densities $f$ and $g$ can be computed as
$$d_{TV}(\mu, \nu) = \frac{1}{2} \int |f - g|,$$
see, e.g., [Mas07]. So, if $\lambda < 1$, the only bound available in general is the trivial one,
$$d_{TV}(\mu_{0,\lambda}, \mu_{1,\lambda}) \leq 1. \qquad (4.6)$$
The previous bound means that the amount of repair, quantified by the parameter $\lambda$, does not control the distance in Total Variation between the modified conditional distributions. Moreover, in some situations, (4.6) turns out to be an equality. Consider, for instance, the point masses
$$\mu_0 = \delta_0, \qquad \mu_1 = \delta_1$$
as the distributions of $X$ in each class, with weights $\pi_0 = \pi_1 = 1/2$. Then the barycenter is $\mu_B = \delta_{1/2}$ and the partially repaired distributions are
$$\mu_{0,\lambda} = \delta_{\lambda/2}, \qquad \mu_{1,\lambda} = \delta_{1 - \lambda/2}.$$
In this particular case, the distance in Total Variation can be easily computed as
$$d_{TV}(\mu_{0,\lambda}, \mu_{1,\lambda}) = \mathbf{1}_{\{\lambda < 1\}}.$$
As a consequence, if $\lambda < 1$ then $d_{TV}(\mu_{0,\lambda}, \mu_{1,\lambda}) = 1$, which means that the protected attribute could be perfectly predicted from the partially repaired data set, even for values of $\lambda$ arbitrarily close to $1$. Thus, this provides some argument against the use of this kind of repair, since the reparation should instead enforce a small distance between these two distributions to ensure a certain desired level of fairness.
Hence, rather than using a displacement along the Wasserstein geodesic between the distributions, we propose the following approach, called Random Repair, which enables a better control of their Total Variation distance.
Let $B$ be a target variable with a general distribution $\nu$, and let $\pi$ be a Bernoulli variable with parameter $\lambda$, independent of $(X, S, B)$. Note that $X \mid S = 0$ and $X \mid S = 1$ follow the original conditional distributions $\mu_0$ and $\mu_1$.
Let us consider the following repair procedure, which consists in randomly changing the original distribution of the variables by selecting either the target distribution or the original conditional distribution, the choice between both being governed by the Bernoulli variable $\pi$: set $\tilde{X} = \pi B + (1-\pi) X$. This defines, for $s \in \{0,1\}$, the repaired distributions
$$\tilde{\mu}_{s,\lambda} = (1-\lambda)\, \mu_s + \lambda\, \nu. \qquad (4.7)$$
Note that, as in the Geometric Repair, $\lambda = 1$ gives the totally repaired distributions and $\lambda = 0$ the original ones. Unlike in the previous procedure, in this setting the parameter $\lambda$ does play a role in controlling the distance between the repaired distributions:
$$d_{TV}(\tilde{\mu}_{0,\lambda}, \tilde{\mu}_{1,\lambda}) = (1-\lambda)\, d_{TV}(\mu_0, \mu_1) \leq 1 - \lambda.$$
Hence, this bound suggests that $\lambda$ should be close to 1 to ensure non-predictability of the protected attribute.
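On a finite sample, Random Repair amounts to drawing one Bernoulli variable per observation and keeping either the repaired or the original value. A minimal sketch (the helper name `random_repair` and the toy arrays are ours):

```python
import numpy as np

def random_repair(x, x_repaired, lam, rng):
    """Random Repair (4.7): keep the totally repaired value with probability
    lam and the original one with probability 1-lam, so each conditional
    law becomes the mixture (1-lam)*mu_s + lam*nu."""
    keep = rng.random(len(x)) < lam            # Bernoulli(lam) mask
    return np.where(keep, x_repaired, x)

x          = np.array([0.0, 2.0, 10.0, 12.0])  # original data
x_repaired = np.array([5.0, 7.0, 5.0, 7.0])    # totally repaired version

rng = np.random.default_rng(3)
print(random_repair(x, x_repaired, lam=1.0, rng=rng))  # all repaired
print(random_repair(x, x_repaired, lam=0.0, rng=rng))  # all original
```

Intermediate values of lam mix the two datasets, realizing the mixture distributions of (4.7) and hence the linear total-variation decay above.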
Finally, observe that the misclassification error obtained with the Randomly Repaired data is a mixture, with weights $\lambda$ and $1-\lambda$, of the error with the totally repaired variable and the error with the original $X$. Thus the use of the Wasserstein barycenter as the target distribution $\nu$ is still justified.
Therefore, in the following we promote the use of Random Repair to enhance the Disparate Impact while not hampering the efficiency of the classification too much. This is studied in the following section.
5 Numerical Analysis of Fair Correction of a database
As the distributions at hand are empirical, the existence of an optimal transport map is not guaranteed, and the repair procedure of Section 4, which blurs the protected variable in the original data, must be adapted. In this section, we propose a new algorithm to carry this out which, in practice, achieves total fairness, in contrast with the existing algorithms in the literature.
5.1 Computational aspects
Let $(x_i, s_i, y_i)$, $1 \leq i \leq n$, be an observed sample of $(X, S, Y)$, and denote by $n_0$ and $n_1$ the number of instances in each protected class. For ease of exposition and without loss of generality, suppose that the observations are ordered by the value of $s_i$, so we can write $\{x_1^0, \ldots, x_{n_0}^0\}$ for the sample of the class $S=0$ and $\{x_1^1, \ldots, x_{n_1}^1\}$ for the class $S=1$, with empirical measures $\mu_{n_0} = \frac{1}{n_0} \sum_{i=1}^{n_0} \delta_{x_i^0}$ and $\mu_{n_1} = \frac{1}{n_1} \sum_{j=1}^{n_1} \delta_{x_j^1}$.
Generally, the sizes $n_0$ and $n_1$ of the two samples are different, and Monge maps may not even exist between one empirical measure and another. This happens when their weight vectors are not compatible, which is always the case when the target measure has more support points than the source measure. Hence, the solution to the optimal transport problem does not correspond to finding an optimal transport map, but an optimal transport distribution. The quadratic cost function becomes discrete, and can be written as a matrix $C \in \mathbb{R}^{n_0 \times n_1}$ with $C_{i,j} = \| x_i^0 - x_j^1 \|^2$. The Wasserstein distance between $\mu_{n_0}$ and $\mu_{n_1}$ is the square root of the optimum of a network flow problem known as the transportation problem. It consists in finding a matrix $\gamma$ which minimizes the transportation cost between the two distributions as follows:
$$W_2^2(\mu_{n_0}, \mu_{n_1}) = \min_{\gamma \geq 0} \sum_{i,j} \gamma_{i,j} C_{i,j}, \quad \text{s.t.} \quad \sum_{j} \gamma_{i,j} = \frac{1}{n_0} \; \forall i, \qquad \sum_{i} \gamma_{i,j} = \frac{1}{n_1} \; \forall j. \qquad (5.1)$$
If $\gamma^*$ is a solution to the linear program (5.1) then, according to Remark 4.1, the distribution
$$\mu_{B,n} = \sum_{i,j} \gamma^*_{i,j}\, \delta_{\pi_0 x_i^0 + \pi_1 x_j^1}$$
is a barycenter of $\mu_{n_0}$ and $\mu_{n_1}$ with respect to the weights $\pi_0$ and $\pi_1$. See [CD14] for details on discrete Wasserstein distance and Optimal Transport computation.
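Problem (5.1) is a small linear program, so for modest sample sizes it can be solved directly with a generic LP solver; dedicated OT solvers (see [CD14]) scale much better. A sketch using scipy's `linprog` on a hypothetical toy example with $n_0 = 2$ and $n_1 = 3$:

```python
import numpy as np
from scipy.optimize import linprog

# Empirical supports of the two groups (different sizes allowed).
x0 = np.array([[0.0], [1.0]])            # n0 = 2 points, weight 1/2 each
x1 = np.array([[0.0], [1.0], [2.0]])     # n1 = 3 points, weight 1/3 each
n0, n1 = len(x0), len(x1)

C = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)   # C_ij = ||x_i^0 - x_j^1||^2

# Linear program (5.1): min <gamma, C> s.t. row sums 1/n0, column sums 1/n1.
A_eq = np.zeros((n0 + n1, n0 * n1))
for i in range(n0):
    A_eq[i, i * n1:(i + 1) * n1] = 1.0   # row-sum (source marginal) constraints
for j in range(n1):
    A_eq[n0 + j, j::n1] = 1.0            # column-sum (target marginal) constraints
b_eq = np.concatenate([np.full(n0, 1 / n0), np.full(n1, 1 / n1)])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
gamma = res.x.reshape(n0, n1)            # optimal transport plan gamma*
w2_squared = res.fun                     # squared Wasserstein distance
print(gamma)
print(w2_squared)
```

The resulting plan splits the mass of each source point across several target points, which is exactly why the repair step cannot be a one-to-one mapping here.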
5.1.1 Total repair
In practice, the implementation of the repair scheme of Section 4 is based on the transport matrix $\gamma^*$ from $\mu_{n_0}$ to $\mu_{n_1}$. As we have pointed out, in this transport scheme the major difficulty comes from the fact that the sizes of these sets are different, so the transport is not a one-to-one mapping. Each point in the source set could be transported (with weights) onto several points of the target, or various points in the source could be moved onto the same point of the target. As a consequence, we must adapt the algorithm that produces the repaired data set, denoted by $\tilde{x}$. In the following, we detail two different methods. The first one is similar to some existing in the literature and does not achieve total fairness in this practical framework, while the second one is a novelty and does guarantee this property for the new data set.

On the one hand, as depicted in Figure 4 (left), each original point $x_i^0$ is changed into a unique point given by
$$\tilde{x}_i^0 = \pi_0\, x_i^0 + \pi_1\, n_0 \sum_{j=1}^{n_1} \gamma^*_{i,j}\, x_j^1,$$
and symmetrically for the points of the class $S=1$. Doing this, the repaired set will be a collection of exactly $n = n_0 + n_1$ points. This approach generalizes to higher dimensions the idea of previous works [FFM15] and [JL17], which only consider the one-dimensional case, where the transport is written in terms of the distribution functions. However, in practice it generates two repaired sets that are not the same and hence do not achieve (3.1).
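This unique-image rule is a barycentric projection of the transport plan: each source point is averaged with the mean of its transport targets. A sketch for the $S=0$ group only, with a hypothetical plan (`repair_projected` is our name):

```python
import numpy as np

def repair_projected(x0, x1, gamma, pi0):
    """Method 1 (unique image per point): replace every x_i^0 by
    pi0*x_i^0 + pi1 * sum_j (gamma_ij / sum_j gamma_ij) * x_j^1,
    i.e. the barycentric projection of its row of the transport plan.
    This keeps n0 points, but with empirical data the two repaired
    groups need not coincide, so (3.1) may fail."""
    w = gamma / gamma.sum(axis=1, keepdims=True)   # normalised rows
    return pi0 * x0 + (1 - pi0) * (w @ x1)

x0 = np.array([[0.0], [1.0]])
x1 = np.array([[2.0], [3.0]])
gamma = np.array([[0.5, 0.0],      # x0[0] fully matched with x1[0]
                  [0.0, 0.5]])     # x0[1] fully matched with x1[1]
print(repair_projected(x0, x1, gamma, pi0=0.5))   # → [[1.], [2.]]
```

When a row of the plan has several nonzero entries, the projected point is a weighted average of several targets, which is where the two repaired groups start to diverge.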

To ensure total fairness, each point cannot be changed into a unique repaired point. Instead, each point will split its mass to be transported onto several modified versions, generating an extended set which is formed by the complete support of the distribution $\mu_{B,n}$. More precisely, as represented in Figure 4 (right), for every pair $(i,j)$ such that $\gamma^*_{i,j} > 0$, we define two points