Confidence Intervals for Testing Disparate Impact in Fair Learning

July 17, 2018 · Philippe Besse et al.

We provide the asymptotic distribution of the major indexes used in the statistical literature to quantify disparate treatment in machine learning. We aim at promoting the use of confidence intervals when testing the so-called group disparate impact, and we illustrate on several examples the importance of using confidence intervals rather than a single point estimate.


1 Introduction

With the generalization of machine learning algorithms to a large variety of fields, their impact on human life keeps growing. Originally designed to improve recommendation systems in the internet industry, they are now widely used in a number of very sensitive areas such as medicine, human resources, banking and insurance, or criminal justice risk assessment; see for instance [Romei & Ruggieri(2014)], [Berk et al.(2017)Berk, Heidari, Jabbari, Kearns & Roth], [Pedreschi et al.(2012)Pedreschi, Ruggieri & Turini] or [Friedler et al.(2018)Friedler, Scheidegger, Venkatasubramanian, Choudhary, Hamilton & Roth] and references therein. Meant to make accurate and efficient automatic decisions, mimicking and even outmatching human expertise, machine learning algorithms may exhibit discriminatory behaviours, in the sense that groups of the population are treated in distinct ways. Even if some discrimination may appear naturally and could be considered acceptable (see for instance [Kamiran et al.(2010)Kamiran, Calders & Pechenizkiy]), quantifying the effect of machine learning in a given situation is of high importance. This notion of fairness in machine learning algorithms has received growing interest over the last years and is crucial in order to guarantee fair treatment for the whole population. Moreover, enhancing fairness can also contribute to greater trust in machine learning algorithms among the general public. Yet providing a definition of fairness or equity for machine learning algorithms is a complicated task, and several propositions have been formulated. We will focus on the issue of biased training data, which is one of the several possible causes of discriminatory outcomes in machine learning. First described in legal terms [Winrow & Schieber(2009)], fairness is now quantified in order to detect its lack in automatic algorithms.
Depending on the objectives, various quantitative measures of fairness have been designed, but these measures rest on unstated assumptions about fairness in society. Fairness is often defined with respect to selected attributes, called protected attributes, which carry discriminatory information about the population that should not be used or retrieved by the algorithm's decision. Among all these criteria, two main categories have been considered. The first one deals with the distribution of a decision rule with respect to the protected attribute. This point of view gives rise to the Disparate Impact described for instance in [Feldman et al.(2015)Feldman, Friedler, Moeller, Scheidegger & Venkatasubramanian]. The second one tackles the issue of disparate error rates of the algorithmic decisions between the different groups of the population. This point of view was originally proposed for recidivism of defendants in [Flores et al.(2016)Flores, Bechtel & Lowenkamp]. Many other criteria (see for instance [Berk et al.(2017)Berk, Heidari, Jabbari, Kearns & Roth] for a review) have been proposed, sometimes leading to incompatible formulations as stated in [Chouldechova(2017)]. Note finally that the notion of fairness is closely related to the notion of privacy, as pointed out in [Dwork et al.(2011)Dwork, Hardt, Pitassi, Reingold & Zemel].

Our goal in this paper is not to discuss the criteria chosen, but rather to promote the use of confidence intervals to control the risk of a false discriminatory assessment. While many criteria have been described in the fair learning literature, they are often used as a score without statistical control. In the cases where test procedures or confidence bounds are provided, they are obtained using a resampling scheme to build standardized Gaussian confidence intervals, under a Gaussian assumption which does not correspond to the distribution of the observations. Hence in this work, we provide the exact asymptotic distribution of the estimates of some fairness criteria, obtained through the classical approach of the Delta method described in [Van der Vaart(1998)].

2 Quantifying unfair treatment using statistical criteria

Even if unfairness in machine learning is a recent topic, many criteria have already been considered in order to detect unfair algorithmic treatment.
In the literature, detecting unfair treatment can first be achieved by looking at individual outcomes and measuring how different they might be for similar persons. For this, one may quantify how dissimilar the outcomes are in a neighbourhood of a person who might have suffered a biased decision. This is usually achieved by comparing the predictions of the algorithm for two individuals who agree on every characteristic except the value of the variable which may lead to possible disparate treatment. Yet such measures may be very unstable and not representative of the whole behaviour of the decision rule.
For these reasons, statistical measures that detect group discriminations, or group disparate treatment, by an algorithm have recently been introduced to assess unfair algorithmic treatment with respect to a variable called the protected variable.

Actually, the statistical model is the following. The problem consists in forecasting a binary variable Y ∈ {0, 1}, using observed covariates X. We assume moreover that the population can be divided into two categories that represent a bias, modeled by a variable S. This variable is called the protected attribute; it takes the value S = 0 for the “minority” class, supposed to be the unfavored class, and S = 1 for the “default”, and usually favored, class. S = 0 thus represents the group we wish to protect from discrimination, and the bias represents the degree to which it has been discriminated against. Note that in the case where S is not a binary variable but multidimensional or multi-class, we can perform several tests, identifying in each case a less favored class. We also introduce a notion of positive prediction, in the sense that Y = 1 represents a success while Y = 0 is a failure. Hence the classification problem aims at predicting a success Y = 1 from the variables X, using a family of binary classifiers g. For every x, the outcome of the classification will be the prediction ŷ = g(x).

The different frameworks considered in the statistical literature intend to quantify the distance between the outcome of the algorithm and an ideal situation where decisions would not be impacted by the protected variable. All criteria amount to measuring how the decision is correlated with the clustering of the population induced by the protected variable. Yet they differ depending on the observations which are available to the statistician, which gives rise to the following criteria.

  • When considering a database made of observations (X_i, S_i), i = 1, …, n, and a variable Y to be predicted by a classifier g, the disparate impact measures how the two predicted labels are spread between the subgroups defined by the variable S. Namely, knowing the decision g(X) and the protected variable S, the disparate impact assessment (DIA) of this classifier is defined as

    DI(g) = P(g(X) = 1 | S = 0) / P(g(X) = 1 | S = 1).

    This quantity quantifies how far a classifier is from the ideal situation, called Statistical Parity, where

    P(g(X) = 1 | S = 0) = P(g(X) = 1 | S = 1).

    This means that the probability of a successful outcome is the same across the groups. For instance, if we consider that the protected variable represents gender, the value S = 0 being assigned to “female” and S = 1 to “male”, we would say that the algorithm used by a company achieves Statistical Parity if a man and a woman have the same probability of success (for instance being hired or promoted). The classifier g is said to have Disparate Impact at level a, with respect to (X, S), if DI(g) ≤ a. Note that the Disparate Impact of a classifier measures its level of fairness: the smaller the value of DI(g), the less fair the classifier. The classification rules considered in this framework are such that DI(g) ≤ 1, because we are assuming that the default class is more likely to have a successful outcome. Thus, in the definition, the level of fairness takes values a ∈ (0, 1]. We point out that the value a_0 = 0.8, which is also known in the literature as the 80% rule, has been cited as a legal score to decide whether the discrimination of the algorithm is acceptable or not (see for instance in [Feldman et al.(2015)Feldman, Friedler, Moeller, Scheidegger & Venkatasubramanian]). This rule can be explained as “for every 5 individuals with a successful outcome in the majority class, 4 in the minority class will have a successful outcome too”.
    In what follows, to promote fairness, it will be useful to state the definition in the reverse sense. A classifier g does not have Disparate Impact at level a, with respect to (X, S), if DI(g) > a.

    Within this framework, the disparate impact assessment DI(g) can be compared to the Disparate Impact of the data,

    DI(Y) = P(Y = 1 | S = 0) / P(Y = 1 | S = 1),

    which quantifies the same bias but on the true distribution of the label Y. It can be useful to determine whether a decision rule increases or not the discrimination that exists in the learning sample.

  • When X, S and Y can all be observed, other criteria can be used to give an insight into possible unfairness.

    For this, fairness can be defined as the situation where the accuracy of the classification process is the same for both groups. We point out that this is equivalent to asserting that the false negative rate and the false positive rate are the same for both groups. The mathematical formalization of this statement is given by

    P(g(X) = Y | Y = y, S = 0) = P(g(X) = Y | Y = y, S = 1),  for y ∈ {0, 1}.

    This situation is called Conditional Procedure Accuracy Equality. The condition that the false negative rate and the false positive rate are the same for both groups is also called equalized odds. This framework was developed in the COMPAS controversy, see for instance [Flores et al.(2016)Flores, Bechtel & Lowenkamp] or [Angwin et al.(2016)Angwin, Larson, Mattu & Kirchner], on forecasting recidivism of prisoners. When an offender is eligible for parole, judges assess the likelihood that the offender will re-offend after being released as part of the parole decision. Many jurisdictions now use automated prediction methods such as the COMPAS score, which are tailored to achieve a balanced Disparate Impact. Yet disparate treatment may still arise from a disparity in the errors of the decision rule.
    To quantify the difference with respect to this fair situation, we will consider the following quantities, the true positive and true negative assessments

    TP(g) = P(g(X) = 1 | Y = 1, S = 0) / P(g(X) = 1 | Y = 1, S = 1),
    TN(g) = P(g(X) = 0 | Y = 0, S = 0) / P(g(X) = 0 | Y = 0, S = 1).

  • In the same setting where X, S and Y are observed, another criterion of fairness is given by the Conditional Use Accuracy Equality. This amounts to defining fairness as the situation where the predictive value of the algorithm, conditionally on its outcome, is the same for both groups:

    P(Y = y | g(X) = y, S = 0) = P(Y = y | g(X) = y, S = 1),  for y ∈ {0, 1}.

    This criterion corresponds to the case where the difference between the output of the classifier and the real label is measured within both groups. Here again we will use the following criteria to assess the gap between the data and Conditional Use Accuracy Equality

    PPV(g) = P(Y = 1 | g(X) = 1, S = 0) / P(Y = 1 | g(X) = 1, S = 1),
    NPV(g) = P(Y = 0 | g(X) = 0, S = 0) / P(Y = 0 | g(X) = 0, S = 1).

    This criterion may seem close to the previous one, but it focuses on the outcomes actually received by the individuals, while the previous one is more devoted to the analysis of the errors of the classification algorithm.

Such criteria are imperfect and may fail to capture all aspects of fairness. In particular, it is easy to achieve Statistical Parity simply by flipping the labels of an arbitrary set of individuals in the protected class, or by randomly repairing the data, pushing a random part of the data towards the so-called Wasserstein barycenter of the data as described in [del Barrio et al.(2018)del Barrio, Gamboa, Gordaliza & Loubes]. Yet they provide some quantification of the level of unfair treatment and give some insights about the disparate treatment received by the different groups.
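The empirical counterparts of these criteria are simple plug-in frequency ratios. The sketch below is a minimal illustration (ours, not the authors' code; the ratio form of the conditional criteria follows the reading above, and all array names are assumptions), computing the disparate impact, the true positive assessment and the positive predictive value assessment between the groups S = 0 and S = 1.

```python
import numpy as np

def disparate_impact(y_pred, s):
    """Empirical DI(g) = P(g(X)=1 | S=0) / P(g(X)=1 | S=1)."""
    return y_pred[s == 0].mean() / y_pred[s == 1].mean()

def true_positive_assessment(y_pred, y_true, s):
    """Empirical TP(g): ratio of true positive rates across groups."""
    tpr0 = y_pred[(y_true == 1) & (s == 0)].mean()
    tpr1 = y_pred[(y_true == 1) & (s == 1)].mean()
    return tpr0 / tpr1

def positive_predictive_assessment(y_pred, y_true, s):
    """Empirical PPV(g): ratio of positive predictive values across groups."""
    ppv0 = y_true[(y_pred == 1) & (s == 0)].mean()
    ppv1 = y_true[(y_pred == 1) & (s == 1)].mean()
    return ppv0 / ppv1
```

For instance, predictions [1, 0, 1, 0] in the minority group against [1, 1, 1, 0] in the default group give an empirical disparate impact of 0.5/0.75 ≈ 0.67.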

3 Testing lack of fairness and confidence intervals

Let (X_i, S_i, Y_i), for i = 1, …, n, be a random sample of independent and identically distributed variables. The previous criteria can be consistently estimated by their empirical versions. Yet the value of such an estimate depends on the data sample. Given the importance of obtaining an accurate proof of unfairness in a decision rule, it is important to build confidence intervals in order to control the error when detecting unfairness. In the literature this is often achieved by averaging over several resamplings of the data. We provide in the following the exact asymptotic behavior of the estimates in order to build confidence intervals.

Theorem 3.1 (Asymptotic behavior of the Disparate Impact Assessment estimator).

Set T_n the empirical estimator of DI(g) as

    T_n = ( Σ_{i=1}^n 1{g(X_i)=1, S_i=0} / Σ_{i=1}^n 1{S_i=0} ) / ( Σ_{i=1}^n 1{g(X_i)=1, S_i=1} / Σ_{i=1}^n 1{S_i=1} ).

Then the asymptotic distribution of this quantity is given by

    √n ( T_n − DI(g) ) →_d N(0, σ_g²),   (3.1)

where

    σ_g² = DI(g)² [ (1 − p_0)/(π_0 p_0) + (1 − p_1)/(π_1 p_1) ],

and where we have denoted p_s = P(g(X) = 1 | S = s) and π_s = P(S = s), for s ∈ {0, 1}.

Hence, we can provide a confidence interval when estimating the disparate impact over a data set. Actually,

    [ T_n − Φ^{-1}(1 − α/2) σ̂_g / √n , T_n + Φ^{-1}(1 − α/2) σ̂_g / √n ],

where σ̂_g denotes the plug-in estimate of σ_g, is a confidence interval for the parameter DI(g), asymptotically of level 1 − α.
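For iid sampling, the plug-in delta-method variance simplifies to σ̂_g²/n = T_n² [ (1 − p̂_0)/(n_0 p̂_0) + (1 − p̂_1)/(n_1 p̂_1) ], where n_s is the size of group s and p̂_s its empirical positive rate. The following sketch (ours, for illustration; it relies on this simplified closed form rather than the full sandwich product used in the Appendix) returns T_n together with the asymptotic interval of level 1 − α.

```python
import numpy as np
from math import sqrt
from statistics import NormalDist

def di_confidence_interval(y_pred, s, alpha=0.05):
    """Point estimate and asymptotic (1 - alpha) confidence interval for DI(g)."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    n0, n1 = int((s == 0).sum()), int((s == 1).sum())
    p0, p1 = y_pred[s == 0].mean(), y_pred[s == 1].mean()
    di = p0 / p1
    # delta-method standard error of the ratio of the two group rates
    se = di * sqrt((1 - p0) / (n0 * p0) + (1 - p1) / (n1 * p1))
    q = NormalDist().inv_cdf(1 - alpha / 2)
    return di, (di - q * se, di + q * se)
```

With 200 positive decisions out of 500 in the minority group against 250 out of 500 in the default group, this yields T_n = 0.8 with a 95% interval of roughly (0.69, 0.91).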
The previous theorem can be used to test the presence of disparate impact at a given level. The test of the hypotheses

    H_0 : DI(g) ≤ a   against   H_1 : DI(g) > a   (3.2)

aims at checking whether g has Disparate Impact at level a; we want to check whether DI(g) > a. Under H_0, the inequality DI(g) ≤ a holds, and so

    P( √n (T_n − a)/σ̂_g > q_{1−α} ) ≤ P( √n (T_n − DI(g))/σ̂_g > q_{1−α} ).

Finally, from the inequality above and (3.1), we have that

    limsup_n P( √n (T_n − a)/σ̂_g > q_{1−α} ) ≤ α

and, equivalently,

    limsup_n P( T_n > a + q_{1−α} σ̂_g/√n ) ≤ α,

where q_{1−α} is the (1 − α)-quantile of the standard Gaussian distribution N(0, 1). In conclusion, the test rejects H_0 at level α when

    T_n > a + q_{1−α} σ̂_g / √n.

The proof of this theorem is quite classical and is postponed to the Appendix. When dealing with Conditional Accuracy, we want to study the asymptotic behavior of the estimators of the rates of True Positives and True Negatives across both groups. The reasoning is similar for the two quantities TP(g) and TN(g), so we will only show the convergence of the True Positive Assessment estimator.

Theorem 3.2.

Set T_n^{TP} an estimate of TP(g),

    T_n^{TP} = ( Σ_{i=1}^n 1{g(X_i)=1, Y_i=1, S_i=0} / Σ_{i=1}^n 1{Y_i=1, S_i=0} ) / ( Σ_{i=1}^n 1{g(X_i)=1, Y_i=1, S_i=1} / Σ_{i=1}^n 1{Y_i=1, S_i=1} ).

Then, the asymptotic distribution of this quantity is given by

    √n ( T_n^{TP} − TP(g) ) →_d N(0, σ_{TP}²),   (3.3)

where

    σ_{TP}² = TP(g)² [ (1 − r_0)/(ρ_0 r_0) + (1 − r_1)/(ρ_1 r_1) ],

and where we have denoted r_s = P(g(X) = 1 | Y = 1, S = s) and ρ_s = P(Y = 1, S = s), for s ∈ {0, 1}.

Again, we give the theorem that establishes the asymptotic behaviour of the estimator of PPV(g), noting that the estimator corresponding to NPV(g) is analyzed analogously.

Theorem 3.3.

Set T_n^{PPV} an estimate of PPV(g),

    T_n^{PPV} = ( Σ_{i=1}^n 1{Y_i=1, g(X_i)=1, S_i=0} / Σ_{i=1}^n 1{g(X_i)=1, S_i=0} ) / ( Σ_{i=1}^n 1{Y_i=1, g(X_i)=1, S_i=1} / Σ_{i=1}^n 1{g(X_i)=1, S_i=1} ).

Then, the asymptotic distribution of this quantity is given by

    √n ( T_n^{PPV} − PPV(g) ) →_d N(0, σ_{PPV}²),   (3.4)

where

    σ_{PPV}² = PPV(g)² [ (1 − q_0)/(τ_0 q_0) + (1 − q_1)/(τ_1 q_1) ],

and where we have denoted q_s = P(Y = 1 | g(X) = 1, S = s) and τ_s = P(g(X) = 1, S = s), for s ∈ {0, 1}.

The proof of this theorem is similar to the one of Theorem 3.2 and is omitted.
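Theorems 3.2 and 3.3 yield intervals by the same plug-in construction: restrict the sample to the relevant stratum (Y = 1 for the True Positive Assessment, g(X) = 1 for the predictive value) and reuse the ratio-of-proportions standard error. A minimal sketch for TP(g) (ours, under the same simplified variance as before):

```python
import numpy as np
from math import sqrt
from statistics import NormalDist

def tp_assessment_ci(y_pred, y_true, s, alpha=0.05):
    """Point estimate and asymptotic (1 - alpha) CI for TP(g),
    the ratio of true positive rates between groups S=0 and S=1."""
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))
    keep = y_true == 1                      # restrict to the Y = 1 stratum
    yp, ss = y_pred[keep], s[keep]
    n0, n1 = int((ss == 0).sum()), int((ss == 1).sum())
    r0, r1 = yp[ss == 0].mean(), yp[ss == 1].mean()
    ratio = r0 / r1
    se = ratio * sqrt((1 - r0) / (n0 * r0) + (1 - r1) / (n1 * r1))
    q = NormalDist().inv_cdf(1 - alpha / 2)
    return ratio, (ratio - q * se, ratio + q * se)
```

The analogous interval for PPV(g) is obtained by stratifying on g(X) = 1 instead of Y = 1.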

4 Using confidence intervals for real data sets

To illustrate these tests we first consider the Adult Income data set. Each instance consists of the values of 14 attributes, numeric and categorical, together with a categorization of each person as having an income of more or less than $50,000 per year. This last attribute will be the target variable in the study. We have access to the whole information: the variables X and the true observed variable Y. Two variables can be considered as protected: the sex and the origin. We first estimate the Disparate Impact that describes the discrimination in the learning sample with respect to both variables. This score describes how the learning sample presents a group discrimination, either due to the selection of the sample or to the discrimination present in the whole population. Define DI_sex (respectively DI_origin) the Disparate Impact with respect to the sex variable, such that S = 0 corresponds to female while S = 1 corresponds to male (respectively, with respect to the origin variable, S = 0 corresponds to foreign origin while S = 1 corresponds to native). We get the following values with their corresponding confidence intervals.

If we consider the threshold used as evidence of discriminatory behavior in US trials (see for instance [Mercat-Bruns(2016)] and references therein), the disparate impact should be greater than 0.8 to guarantee the absence of disparate impact. Hence in this situation, both variables generate discrimination, in a more severe way for the sex than for the origin.

We now consider learning algorithms to predict the variable of interest and study the disparate impact of these decision rules. For this we consider either a logit model or a random forest, built with all variables including the protected ones and optimized using cross-validation. We obtain the following results.

We can see that in both cases the algorithms reinforce the discrimination, exhibiting a smaller disparate impact than the true variable. Indeed, classification algorithms aim at separating the population, thereby amplifying the bias found in the sample. Hence the previous algorithms are unfair in the sense that discrimination is reinforced.
Then, we perform the same computations with algorithms built without using the protected variables, which could correspond to a naive attempt to promote fairness.
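This experiment can be reproduced in outline on synthetic data. The sketch below is purely illustrative: it replaces the real Adult data by a simulated sample in which income depends on a covariate correlated with the protected attribute, and fits a logistic model by plain gradient descent rather than a tuned logit or random forest. It compares the empirical disparate impact of predictors trained with and without the protected variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
s = rng.integers(0, 2, n)                        # protected attribute S in {0, 1}
x = rng.normal(loc=s.astype(float), scale=1.0)   # covariate correlated with S
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(float)  # outcome depends on x only

def fit_predict(X, y, lr=0.5, steps=3000):
    """Minimal logistic regression via gradient descent (illustrative)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return (X @ w > 0.0).astype(float)           # predict 1 when p > 1/2

def disparate_impact(y_pred, s):
    return y_pred[s == 0].mean() / y_pred[s == 1].mean()

X_with = np.column_stack([np.ones(n), x, s])     # uses the protected variable
X_without = np.column_stack([np.ones(n), x])     # discards it

di_with = disparate_impact(fit_predict(X_with, y), s)
di_without = disparate_impact(fit_predict(X_without, y), s)
```

On such a sample both disparate impacts are below 1 and close to each other: since the covariate already encodes the group difference, dropping the protected attribute barely changes the decisions, which is the phenomenon observed on the Adult data.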

Even if the disparate impact improves very slightly, the changes in the disparate impact and their confidence intervals are not statistically significant. So discarding the protected variables when building the model does not improve the fairness of the predictor: the social determinism captured by the remaining covariates is stronger than the protected attributes themselves. A woman or a non-caucasian person is expected to earn less whatever their education level. This justifies the use of mathematical fairness methods to reduce disparate treatment, as discussed for instance in [Kleinberg et al.(2016)Kleinberg, Mullainathan & Raghavan] or [del Barrio et al.(2018)del Barrio, Gamboa, Gordaliza & Loubes].

The second data set is the German Credit data set. This data set is often claimed to exhibit some discrimination with respect to origin in the success of being granted a credit by the German bank. Hence we compute the disparate impact with respect to origin. We obtain

Hence here confidence intervals play an important role. Actually, the disparate impact is not statistically significantly lower than 0.8, which entails that the discrimination of the decision rule of the German bank cannot be established; this again promotes the use of a proper confidence interval.

A third data set is composed of the data of the controversial COMPAS score detailed in [Dieterich et al.(2016)Dieterich, Mendoza & Brennan]. The data consist of 7214 offenders with personal variables observed over two years. A score predicts their level of dangerousness, which determines whether they can be released, while a variable indicates whether recidivism occurred. Hence recidivism of offenders is predicted using a score and confronted with possible racial discrimination, which corresponds to the protected attribute. The protected variable separates the population into caucasian and non-caucasian. To evaluate the level of discrimination we first compute the disparate impact with respect to the true variable and to the COMPAS score seen as a predictor.

In both cases the data are biased, but the level of discrimination is low. Yet, as mentioned in all the studies on this data set, the level of prediction errors is significantly different according to the ethnic origin of the defendant. Actually, the conditional accuracy scores and their corresponding confidence intervals clearly show the unbalanced treatment received by the two populations.

This unbalanced treatment is clearly assessed with the confidence intervals.

5 Conclusions

Quantifying the level of fairness of a learning sample or of an algorithm is a difficult task, since points of view may differ on how to define the notion of disparate treatment. Yet, when dealing with the main indexes that have been used, it is important, as in any statistical analysis, to obtain a confidence interval at a given level and not just a single numerical value. For this we provided the asymptotic distribution of the estimates of three major fairness indexes, in order to promote their use in assessing unfair treatment by machine learning algorithms.

6 Appendix

Proof of Theorem 3.1

Proof.

Consider for i = 1, …, n the random vectors

    Z_i = ( 1{g(X_i)=1, S_i=0}, 1{S_i=0}, 1{g(X_i)=1, S_i=1}, 1{S_i=1} ),

where we denote p_s = P(g(X) = 1 | S = s) and π_s = P(S = s). Thus, Z_i has expectation

    E[Z_i] = ( π_0 p_0, π_0, π_1 p_1, π_1 ).

The elements of the covariance matrix Σ of Z_i are computed as follows:

    Var(Z_i^(1)) = π_0 p_0 (1 − π_0 p_0),   Var(Z_i^(2)) = π_0 (1 − π_0),
    Var(Z_i^(3)) = π_1 p_1 (1 − π_1 p_1),   Var(Z_i^(4)) = π_1 (1 − π_1),
    Cov(Z_i^(1), Z_i^(2)) = π_0 p_0 (1 − π_0),   Cov(Z_i^(1), Z_i^(3)) = −π_0 π_1 p_0 p_1,
    Cov(Z_i^(1), Z_i^(4)) = −π_0 π_1 p_0,   Cov(Z_i^(2), Z_i^(3)) = −π_0 π_1 p_1,
    Cov(Z_i^(2), Z_i^(4)) = −π_0 π_1,

and finally,

    Cov(Z_i^(3), Z_i^(4)) = π_1 p_1 (1 − π_1).

From the Central Limit Theorem in dimension 4, we have that

    √n ( Z̄_n − E[Z_1] ) →_d N(0, Σ).

Now consider the function

    φ(z_1, z_2, z_3, z_4) = (z_1 / z_2) / (z_3 / z_4).

Applying the Delta-Method (see in [Van der Vaart(1998)]) for the function φ, we conclude that

    √n ( φ(Z̄_n) − DI(g) ) →_d N(0, ∇φ(E[Z_1])ᵀ Σ ∇φ(E[Z_1]) ),

where φ(Z̄_n) = T_n and σ_g² = ∇φ(E[Z_1])ᵀ Σ ∇φ(E[Z_1]). ∎
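The statement of Theorem 3.1 can be checked numerically: drawing many samples from a known joint law of (g(X), S), the empirical variance of √n (T_n − DI(g)) should approach the delta-method variance σ_g² = DI(g)² [ (1 − p_0)/(π_0 p_0) + (1 − p_1)/(π_1 p_1) ]. A Monte Carlo sketch (ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
pi0, p0, p1 = 0.4, 0.3, 0.5       # P(S=0), P(g=1 | S=0), P(g=1 | S=1)
di_true = p0 / p1                 # DI(g) = 0.6
n, reps = 2000, 2000

stats = np.empty(reps)
for k in range(reps):
    s = (rng.random(n) >= pi0).astype(int)      # S = 0 with probability pi0
    rates = np.where(s == 0, p0, p1)
    g = (rng.random(n) < rates).astype(float)   # draw g(X) given S
    t_n = g[s == 0].mean() / g[s == 1].mean()   # empirical disparate impact
    stats[k] = np.sqrt(n) * (t_n - di_true)

# delta-method variance from Theorem 3.1; here sigma2 = 2.7
sigma2 = di_true**2 * ((1 - p0) / (pi0 * p0) + (1 - p1) / ((1 - pi0) * p1))
```

The centered, scaled statistics should then have a mean near 0 and an empirical variance close to sigma2.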

Proof of Theorem 3.2

Proof.

The proof follows the same guidelines as the previous one. We set here

    Z_i = ( 1{g(X_i)=1, Y_i=1, S_i=0}, 1{Y_i=1, S_i=0}, 1{g(X_i)=1, Y_i=1, S_i=1}, 1{Y_i=1, S_i=1} ),

where we denote r_s = P(g(X) = 1 | Y = 1, S = s) and ρ_s = P(Y = 1, S = s). From the Central Limit Theorem, we have that

    √n ( Z̄_n − E[Z_1] ) →_d N(0, Σ),   (6.1)

with Σ the covariance matrix of Z_1, whose entries are computed as in the previous proof. Now consider the function

    φ(z_1, z_2, z_3, z_4) = (z_1 / z_2) / (z_3 / z_4).

Applying the Delta-Method for the function φ, we conclude that

    √n ( φ(Z̄_n) − TP(g) ) →_d N(0, ∇φ(E[Z_1])ᵀ Σ ∇φ(E[Z_1]) ),

where φ(Z̄_n) = T_n^{TP} and σ_{TP}² = ∇φ(E[Z_1])ᵀ Σ ∇φ(E[Z_1]). ∎

References

  • [Angwin et al.(2016)Angwin, Larson, Mattu & Kirchner] Angwin, J, Larson, J, Mattu, S & Kirchner, L (2016), ‘Machine bias: There’s software used across the country to predict future criminals. and it’s biased against blacks.’ ProPublica.
  • [Berk et al.(2017)Berk, Heidari, Jabbari, Kearns & Roth] Berk, R, Heidari, H, Jabbari, S, Kearns, M & Roth, A (2017), ‘Fairness in criminal justice risk assessments: the state of the art,’ arXiv preprint arXiv:1703.09207.
  • [Chouldechova(2017)] Chouldechova, A (2017), ‘Fair prediction with disparate impact: A study of bias in recidivism prediction instruments,’ ArXiv e-prints.
  • [del Barrio et al.(2018)del Barrio, Gamboa, Gordaliza & Loubes] del Barrio, E, Gamboa, F, Gordaliza, P & Loubes, JM (2018), ‘Obtaining fairness using optimal transport theory,’ arXiv preprint arXiv:1806.03195.
  • [Dieterich et al.(2016)Dieterich, Mendoza & Brennan] Dieterich, W, Mendoza, C & Brennan, T (2016), ‘Compas risk scales: Demonstrating accuracy equity and predictive parity,’ Northpointe Inc.
  • [Dwork et al.(2011)Dwork, Hardt, Pitassi, Reingold & Zemel] Dwork, C, Hardt, M, Pitassi, T, Reingold, O & Zemel, R (2011), ‘Fairness Through Awareness,’ ArXiv e-prints.
  • [Feldman et al.(2015)Feldman, Friedler, Moeller, Scheidegger & Venkatasubramanian] Feldman, M, Friedler, SA, Moeller, J, Scheidegger, C & Venkatasubramanian, S (2015), ‘Certifying and removing disparate impact,’ in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, pp. 259–268.
  • [Flores et al.(2016)Flores, Bechtel & Lowenkamp] Flores, AW, Bechtel, K & Lowenkamp, CT (2016), ‘False positives, false negatives, and false analyses: A rejoinder to machine bias: There’s software used across the country to predict future criminals. and it’s biased against blacks,’ Fed. Probation, 80, p. 38.
  • [Friedler et al.(2018)Friedler, Scheidegger, Venkatasubramanian, Choudhary, Hamilton & Roth] Friedler, SA, Scheidegger, C, Venkatasubramanian, S, Choudhary, S, Hamilton, EP & Roth, D (2018), ‘A comparative study of fairness-enhancing interventions in machine learning,’ ArXiv e-prints.
  • [Kamiran et al.(2010)Kamiran, Calders & Pechenizkiy] Kamiran, F, Calders, T & Pechenizkiy, M (2010), ‘Discrimination aware decision tree learning,’ in 2010 IEEE International Conference on Data Mining, pp. 869–874, doi:10.1109/ICDM.2010.50.
  • [Kleinberg et al.(2016)Kleinberg, Mullainathan & Raghavan] Kleinberg, J, Mullainathan, S & Raghavan, M (2016), ‘Inherent trade-offs in the fair determination of risk scores,’ arXiv preprint arXiv:1609.05807.
  • [Mercat-Bruns(2016)] Mercat-Bruns, M (2016), Discrimination at Work, University of California Press.
  • [Pedreschi et al.(2012)Pedreschi, Ruggieri & Turini] Pedreschi, D, Ruggieri, S & Turini, F (2012), ‘A study of top-k measures for discrimination discovery,’ in Proceedings of the 27th Annual ACM Symposium on Applied Computing, ACM, pp. 126–131.
  • [Romei & Ruggieri(2014)] Romei, A & Ruggieri, S (2014), ‘A multidisciplinary survey on discrimination analysis,’ The Knowledge Engineering Review, 29(5), pp. 582–638, doi:10.1017/S0269888913000039.
  • [Van der Vaart(1998)] Van der Vaart, AW (1998), Asymptotic statistics, vol. 3, Cambridge university press.
  • [Winrow & Schieber(2009)] Winrow, BP & Schieber, C (2009), ‘The disparity between disparate treatment and disparate impact: An analysis of the ricci case,’ Academy of Legal, Ethical and Regulatory Issues, p. 27.