On Fairness, Diversity and Randomness in Algorithmic Decision Making

06/30/2017
by Nina Grgić-Hlača et al.

Consider a binary decision making process where a single machine learning classifier replaces a multitude of humans. We raise questions about the resulting loss of diversity in the decision making process. We study the potential benefits of using random classifier ensembles instead of a single classifier in the context of fairness-aware learning and demonstrate various attractive properties: (i) an ensemble of fair classifiers is guaranteed to be fair, for several different measures of fairness, (ii) an ensemble of unfair classifiers can still achieve fair outcomes, and (iii) an ensemble of classifiers can achieve better accuracy-fairness trade-offs than a single classifier. Finally, we introduce notions of distributional fairness to characterize further potential benefits of random classifier ensembles.

1 Introduction

A number of recent works have examined fairness concerns arising from the trend of replacing human decision makers with machine learning based systems in scenarios ranging from recidivism risk estimation [3, 6, 15] and welfare benefit eligibility [13] to loan approvals and credit scoring [9]. However, these studies have largely overlooked the implicit loss in decision process diversity that results from replacing a large number of human decision makers, each of whom might have their own distinct decision criteria, with a single decision making algorithm.

When humans make decisions, diversity in the decision process is inevitable due to our limited cognitive capacities. For instance, no single human judge can possibly estimate recidivism risk for all criminals in a city or country. Consequently, in practice, individual cases are assigned to a sub-panel of one or more randomly selected judges [1, 2]. Random assignment is key to achieving fair treatment, as different sub-panels of human judges might make decisions differently and each case should have an equal chance of being judged by every possible sub-panel.

In contrast, a single decision making algorithm can be scaled easily to handle any amount of workload by simply adding more computing resources. Current practice is to replace a multitude of human decision makers with a single algorithm, such as COMPAS for recidivism risk estimation in the U.S. [3] or the algorithm introduced by the Polish Ministry of Labor and Social Policy, used for welfare benefit eligibility decisions in Poland [13]. However, we remark that one could introduce diversity into machine decision making by instead training a collection of algorithms (each of which might capture a different “school of thought” [14] as used by judges), randomly assigning a case to a subset, then combining their decisions in some ensemble manner (e.g., simple or weighted majority voting, or unanimous consensus). Another motivation for exploring such approaches is the rich literature on ensemble learning, where a combination of a diverse ensemble of predictors has been shown (both theoretically and empirically) to outperform single predictors on a variety of tasks [5].

Against this background, we explore the following question: for the purposes of fair decision making, are there any fundamental benefits to replacing a single decision making algorithm with a diverse ensemble of decision making algorithms? In this paper, we consider the question in a restricted set of scenarios, where the algorithm is a binary classifier and the decisions for any given user are made by a single randomly selected classifier from the ensemble. While restrictive, these scenarios capture decision making in a number of real-world settings (such as a randomly assigned judge deciding whether or not to grant bail to an applicant) and reveal striking results.

Our findings, while preliminary, show that compared to a single classifier, a diverse ensemble can not only achieve better fairness in terms of distributing beneficial outcomes more uniformly amongst the set of deserving users, but can also achieve better accuracy-fairness trade-offs for existing notions (measures) of unfairness such as disparate treatment [12, 16, 17], impact [7, 16, 17], and mistreatment [9, 15]. Interestingly, we find that for certain notions of fairness, a diverse ensemble is not guaranteed to be fair even when individual classifiers within the ensemble are fair. On the other hand, a diverse ensemble can be fair even when the individual classifiers comprising the ensemble are unfair. Perhaps surprisingly, we show that it is this latter property which enables a diverse ensemble of individually unfair classifiers to achieve better accuracy-fairness trade-offs than any single classifier.

Our work suggests that further research in the area of ensemble-based methods may be very fruitful when designing fair learning mechanisms.

2 Fairness of classifier ensembles

We first introduce our ensemble approach (randomly selecting a classifier from a diverse set) and various notions of fairness in classification, then demonstrate interesting, and perhaps surprising, properties of the ensemble classifiers. We assume exactly one sensitive attribute (e.g., gender or race) which is binary, though the results may naturally be extended beyond a single binary attribute.

Assume we have an ensemble of individual classifiers $\{C_1, \ldots, C_N\}$, operating on a dataset $\mathcal{D} = \{(x_i, y_i, z_i)\}_{i=1}^{n}$. Here, $x_i$, $y_i$ and $z_i \in \{0, 1\}$ respectively denote the feature vector, class label and sensitive attribute value of the $i$-th user. Each classifier $C_j$ maps a given user feature vector $x$ to a predicted outcome $\hat{y} = C_j(x)$, $\hat{y} \in \{-1, 1\}$, for $j \in \{1, \ldots, N\}$. We assume we are given a probability distribution $P = (P_1, \ldots, P_N)$ over the classifiers. Overloading notation, we consider the ensemble classifier $C$ defined to operate on $x$ by first selecting some $C_j$ independently at random according to the distribution $P$, and then returning $C_j(x)$. Equivalently, this may be considered an ensemble approach where $p = \sum_{j=1}^{N} P_j \, \mathbb{1}\{C_j(x) = 1\}$ is computed, and we then randomly output $1$ or $-1$ with respective probabilities $p$ and $1 - p$.
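Operationally, this random-selection ensemble is simple to implement. Below is a minimal sketch, assuming labels in $\{-1, 1\}$ and NumPy; the class and method names are ours, not the paper's.

```python
import numpy as np

class RandomSelectionEnsemble:
    """Minimal sketch (our naming) of the ensemble considered here: for each
    user, one constituent classifier is drawn at random according to a fixed
    distribution P, and its prediction is returned."""

    def __init__(self, classifiers, probs, seed=0):
        assert len(classifiers) == len(probs)
        self.classifiers = classifiers                 # callables mapping x -> {-1, +1}
        self.probs = np.asarray(probs, dtype=float)    # selection distribution P
        self.rng = np.random.default_rng(seed)

    def predict_one(self, x):
        # Draw a classifier index j ~ P, independently of x, and apply C_j.
        j = self.rng.choice(len(self.classifiers), p=self.probs)
        return self.classifiers[j](x)

    def positive_probability(self, x):
        # Equivalent view: the probability p that the ensemble outputs +1 for
        # this particular x (it outputs -1 with probability 1 - p).
        return float(sum(P_j for P_j, c in zip(self.probs, self.classifiers) if c(x) == 1))

# Example: two classifiers that each depend only on gender, mixed uniformly.
ens = RandomSelectionEnsemble(
    [lambda x: 1 if x["gender"] == "w" else -1,
     lambda x: 1 if x["gender"] == "m" else -1],
    [0.5, 0.5])
print(ens.positive_probability({"gender": "w"}))   # 0.5, regardless of gender
```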

Two common notions used to assess the fairness of a decision making system require that a classifier should provide [4]: (1) Equality of treatment, i.e., its prediction for a user should not depend on the user’s sensitive attribute value (e.g., man or woman); and/or (2) Equality of impact, i.e., rates of beneficial outcomes should be the same for all sensitive attribute value groups (e.g., men and women). For (2), various measures of beneficial outcome rates have been proposed: acceptance rates into the positive (or negative) class for the group [7, 16, 17]; the classifier’s true positive (or negative) rate for the group [9, 11, 15]; or the classifier’s predictive positive (or negative) rate, also called positive (or negative) predictive value, for the group [11, 15]. For a discussion of these measures, see [11, 15].
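For concreteness, the group benefit measures listed above can be computed as follows. This is an illustrative sketch with function and key names of our own choosing, again assuming labels in $\{-1, 1\}$.

```python
import numpy as np

def group_benefit_rates(y_true, y_pred, z):
    """Per-group benefit measures: positive-class acceptance rate, true
    positive rate, and positive predictive value, for each value of the
    sensitive attribute z."""
    y_true, y_pred, z = map(np.asarray, (y_true, y_pred, z))
    rates = {}
    for group in np.unique(z):
        m = (z == group)
        pred_pos = (y_pred[m] == 1)
        true_pos = pred_pos & (y_true[m] == 1)
        rates[group] = {
            # fraction of the group accepted into the positive class
            "acceptance_rate": pred_pos.mean(),
            # fraction of the group's actual positives that are accepted
            "true_positive_rate": true_pos.sum() / max((y_true[m] == 1).sum(), 1),
            # fraction of the group's accepted users that are actual positives
            "ppv": true_pos.sum() / max(pred_pos.sum(), 1),
        }
    return rates
```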

2.1 Is an ensemble of fair classifiers guaranteed to be fair? In many cases, yes.

For any ensemble $C$ consisting of classifiers $C_1, \ldots, C_N$ as above, it is immediate to see that if all $C_j$ satisfy equality of treatment, then $C$ also satisfies equality of treatment.

Next, one can easily show that if all $C_j$ satisfy equality of impact (i.e., equality of beneficial outcome rates), where the beneficial outcome rates are defined as the acceptance rate into the positive (negative) class, or the true positive (negative) rate, then $C$ will also satisfy equality of impact. For example, if beneficial outcome rates are defined in terms of acceptance rate into the positive class, and expected benefits are the same for all $C_j$:

$$\mathbb{E}\big[\mathbb{1}\{C_j(x) = 1\} \mid z = 0\big] = \mathbb{E}\big[\mathbb{1}\{C_j(x) = 1\} \mid z = 1\big] \quad \text{for all } j \in \{1, \ldots, N\},$$

where $\mathbb{1}\{\cdot\}$ is the indicator function, then one can show that:

$$\mathbb{E}\big[\mathbb{1}\{C(x) = 1\} \mid z = 0\big] = \mathbb{E}\big[\mathbb{1}\{C(x) = 1\} \mid z = 1\big],$$

using linearity of expectation, since all expectations are defined over groups of constant sizes (the left-hand side over the group with $z = 0$ and the right-hand side over the group with $z = 1$). The same can be shown when beneficial outcome rates are defined in terms of true positive (negative) rates. That is, for the true positive rate, if it holds that:

$$\mathbb{E}\big[\mathbb{1}\{C_j(x) = 1\} \mid y = 1, z = 0\big] = \mathbb{E}\big[\mathbb{1}\{C_j(x) = 1\} \mid y = 1, z = 1\big] \quad \text{for all } j \in \{1, \ldots, N\},$$

one can show that:

$$\mathbb{E}\big[\mathbb{1}\{C(x) = 1\} \mid y = 1, z = 0\big] = \mathbb{E}\big[\mathbb{1}\{C(x) = 1\} \mid y = 1, z = 1\big].$$
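Writing out the linearity step explicitly (a sketch in the notation above, for the acceptance-rate case; the true-positive-rate case is identical after additionally conditioning on $y = 1$):

$$\mathbb{E}\big[\mathbb{1}\{C(x) = 1\} \mid z = a\big] = \sum_{j=1}^{N} P_j \, \mathbb{E}\big[\mathbb{1}\{C_j(x) = 1\} \mid z = a\big] = \sum_{j=1}^{N} P_j \, b_j \quad \text{for } a \in \{0, 1\},$$

where the first equality conditions on which classifier is drawn (the draw is independent of $x$, $y$ and $z$), and $b_j$ denotes the common acceptance rate of $C_j$ for both groups. The result does not depend on $a$, so the ensemble's benefit rates are equal across groups.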

On the other hand, this no longer holds if beneficial outcome rates are defined in terms of positive (negative) predictive value, since these values are computed as expectations over the predicted positive or negative class of a classifier $C_j$. Specifically, the expected positive predictive value of a classifier $C_j$ for the group with $z = 0$ is defined as:

$$\mathbb{E}\big[\mathbb{1}\{y = 1\} \mid C_j(x) = 1, z = 0\big].$$

Since the expectation is defined over $\{x : C_j(x) = 1\}$, which changes for every $C_j$, we can no longer apply linearity of expectation, and hence $C$ will not in general satisfy this notion of equality of impact even when all $C_j$ do have this property.
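To make the failure concrete, here is a small hypothetical numeric example of our own (the counts are purely illustrative): each constituent classifier has equal positive predictive value across the two groups, yet the uniform ensemble does not. Group sizes cancel, so only the per-group counts of predicted positives, and of true positives among them, are needed.

```python
# Hypothetical per-group counts: (predicted positives, true positives among them).
# C1 has PPV 0.5 in both groups; C2 has PPV 0.8 in both groups.
counts = {
    "z=0": {"C1": (10, 5), "C2": (10, 8)},
    "z=1": {"C1": (20, 10), "C2": (5, 4)},
}
P = {"C1": 0.5, "C2": 0.5}  # uniform selection distribution

for group, by_clf in counts.items():
    tp = sum(P[c] * t for c, (_, t) in by_clf.items())   # expected true positives
    pp = sum(P[c] * p for c, (p, _) in by_clf.items())   # expected predicted positives
    print(group, "ensemble PPV =", tp / pp)
# z=0 -> 0.65, z=1 -> 0.56: the ensemble violates PPV parity
# even though each constituent classifier satisfies it.
```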

2.2 Can an ensemble of unfair classifiers be fair? Yes.

For all fairness notions of equality of treatment or equality of impact described above, there exist cases where a random ensemble of unfair classifiers can indeed be fair. Here we show examples of such cases for equality of treatment, and for equality of impact (i.e., equality of beneficial outcome rates) where the beneficial outcome rate is defined as the fraction of users from a sensitive attribute value group (e.g., men or women) accepted into the positive class [7, 16, 17]. Examples where the benefit measure is defined in terms of error rates [9, 15] can be constructed similarly.

Equality of treatment.

Consider the example shown in Figure 1, which depicts a decision making scenario involving two sensitive attribute value groups, men and women, and two classifiers $C_1$ and $C_2$. The equality of treatment fairness criterion requires that a classifier must treat individuals equally regardless of their sensitive attribute value (i.e., regardless of whether the subject being classified is a man or a woman). Observe that neither $C_1$ nor $C_2$ satisfies this criterion, since they accept only women and only men, respectively. On the other hand, an ensemble of these classifiers that chooses $C_1$ and $C_2$ uniformly at random satisfies equality of treatment.

Equality of impact.

We provide an example in Figure 2 where the impact fairness benefit measure is the rate of acceptance into the positive class. Comparing the group benefits given by $C_1$ and $C_2$, both classifiers fail the fairness criterion since they have different positive class acceptance rates for men and women (shown in the figure). However, an ensemble which selects $C_1$ with probability $p$ and $C_2$ with probability $1-p$, for the appropriate choice of $p$, achieves the same acceptance rate for both women and men, since the two classifiers' acceptance rate gaps cancel in the mixture (a small sketch of this calculation follows the figure captions).

Figure 1: A fictitious decision making scenario involving two groups of people: men (m) and women (w); a single feature (gender, in this case); and two classifiers, $C_1$ and $C_2$. The classifiers $C_1$ and $C_2$ do not satisfy equality of treatment because their outcomes depend solely on the user’s sensitive attribute value, i.e., $C_1$ ($C_2$) classifies all women (all men) into the positive class while classifying all men (all women) into the negative class. On the other hand, an ensemble of these classifiers that chooses classifier $C_1$ and $C_2$ with probability $1/2$ each is fair because its decisions do not change based on the user's gender.
Figure 2: A decision making scenario involving two groups of people: men (m) and women (w); two features, $x_1$ and $x_2$; and three classifiers, $C_1$, $C_2$ and $C_3$. Green quadrants indicate the ground truth positive class in the training data, while red quadrants indicate the respective negative class. Within each quadrant, the points are distributed uniformly. Gender is not one of the features ($x_1$ and $x_2$) used by the classifiers. Classifiers $C_1$ and $C_2$ do not meet the equality of impact criterion (when group benefits are measured as rates of positive class acceptance) since they assign only men and only women to the positive class, respectively. $C_3$ is a fair classifier by this measure, since it gives both men and women the same positive class acceptance rate. Let $C$ be an ensemble that selects classifier $C_1$ with probability $p$, and classifier $C_2$ with probability $1-p$, for the appropriate choice of $p$. The ensemble, while consisting of unfair classifiers, produces outcomes that are fair: it has the same positive class acceptance rate for both men and women.
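Finding a mixing probability that equalizes the two groups' acceptance rates amounts to solving a single linear equation. Below is a minimal sketch; the helper name and the example rates are ours, not the values from Figure 2.

```python
def equalizing_weight(rates_c1, rates_c2):
    """Given per-group positive-class acceptance rates for two classifiers,
    rates_ci = (rate_for_men, rate_for_women), return the probability p of
    selecting C1 such that the mixture p*C1 + (1-p)*C2 has equal acceptance
    rates for both groups (if such a p in [0, 1] exists)."""
    d1 = rates_c1[0] - rates_c1[1]      # C1's gap between the groups
    d2 = rates_c2[0] - rates_c2[1]      # C2's gap between the groups
    if d1 == d2:
        raise ValueError("gaps are identical; no unique equalizing mixture")
    p = d2 / (d2 - d1)                  # solves p*d1 + (1-p)*d2 = 0
    if not 0.0 <= p <= 1.0:
        raise ValueError("no valid mixture: the gaps must have opposite signs")
    return p

# e.g. a classifier favouring men (0.8 vs 0.2) mixed with one favouring
# women (0.3 vs 0.7) yields equal acceptance rates (0.5 each) when p = 0.4:
print(equalizing_weight((0.8, 0.2), (0.3, 0.7)))   # 0.4
```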

2.3 Can an ensemble of classifiers achieve better accuracy-fairness trade-offs than a single classifier? Yes.

First, observe that by its definition, the accuracy of $C$ is the expectation, over the classifier selection probabilities, of the accuracies of the individual classifiers $C_j$.
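In the notation of Section 2:

$$\mathrm{acc}(C) = \sum_{j=1}^{N} P_j \, \mathrm{acc}(C_j),$$

since the classifier selection is independent of the data point being classified.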

When an individual classifier is optimized for accuracy subject to a fairness constraint, a significant loss of accuracy relative to the optimal unconstrained classifier may be unavoidable. If an ensemble is used instead, we expand our model class to admit combinations of several unfair classifiers, some of which may have significantly higher accuracy than the optimal fair classifier, requiring only that the ensemble classifier $C$ be fair.

We provide an example in Figure 3. We consider fairness as determined by equality of rates of positive class acceptance for men and women. Given the distribution of data shown, for a single classifier to be fair, its decision boundary must be either at the extreme left (everyone is classified as positive) or at the extreme right (everyone is classified as negative); in either case the accuracy is the same, and in this example it is the best achievable by any single fair classifier.

Now consider an ensemble of the two classifiers $C_1$ and $C_2$ shown, selecting either one with probability $1/2$. This ensemble satisfies the fairness criterion (with equal positive rates for each sex) and achieves substantially higher accuracy than the single fair classifier optimum.

Figure 3: A decision making scenario involving one feature, $x$, and three classifiers, $C_1$, $C_2$ and $C_3$. A higher value of $x$ indicates the positive class (green) in the training data for men, but the negative (red) class for women. In this scenario, no individual linear classifier can do better than a low baseline accuracy if we require equal benefits for both groups (where benefits are measured as rates of positive class acceptance). However, an ensemble of $C_1$ and $C_2$ which selects each of them with probability $1/2$ achieves fairness (equality in benefits) with much better accuracy.
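More generally, given a pool of already-trained classifiers with estimated accuracies and per-group acceptance rates, one way to search for a good fair mixture is a small linear program: maximize the ensemble's expected accuracy (which, as noted above, is linear in the selection probabilities) subject to the mixture's acceptance rates being equal across groups. This formulation and the helper below are our illustrative sketch, not the paper's procedure; it assumes SciPy is available.

```python
import numpy as np
from scipy.optimize import linprog

def best_fair_mixture(acc, rate_m, rate_w):
    """acc[j]: accuracy of classifier j; rate_m[j], rate_w[j]: its positive-class
    acceptance rate for men / women. Returns mixture weights P maximizing
    sum_j P_j * acc[j] subject to equal mixture acceptance rates and sum_j P_j = 1."""
    acc, rate_m, rate_w = map(np.asarray, (acc, rate_m, rate_w))
    n = len(acc)
    A_eq = np.vstack([rate_m - rate_w,      # mixture acceptance-rate gap must be zero
                      np.ones(n)])          # weights sum to one
    b_eq = np.array([0.0, 1.0])
    res = linprog(c=-acc, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
    if not res.success:
        raise RuntimeError("no fair mixture found over this pool")
    return res.x

# e.g. two accurate-but-unfair classifiers plus one fair-but-inaccurate one:
# the optimal fair mixture puts weight 0.5 on each of the first two.
print(best_fair_mixture(acc=[0.9, 0.9, 0.5],
                        rate_m=[0.8, 0.2, 0.5],
                        rate_w=[0.2, 0.8, 0.5]))
```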

2.4 Notions of distributional fairness

The behavior of an ensemble classifier differs from that of its constituent classifiers in subtle but important ways. In particular, for data points (or individual users) on which the constituent classifiers yield different outcomes, our approach of randomly selecting a single classifier introduces non-determinism in the classifier output, i.e., there is a non-zero chance of both beneficial and non-beneficial outcomes.

Figure 4: Classifiers $C_1$ and $C_2$ satisfy equality of impact, since their beneficial outcome rates (defined as the rates of positive class acceptance) are the same for men and women. Consider an ensemble of the two classifiers which chooses each of $C_1$ and $C_2$ uniformly at random. The ensemble also satisfies equality of impact, yet the distribution of beneficial outcomes is very different among men and women: half the men (top right quadrant) always get the positive outcome, while half the men (bottom left) always get the negative outcome; whereas every woman gets the positive outcome randomly with probability 0.5.

We illustrate this scenario in Figure 4, showing two classifiers $C_1$ and $C_2$, where each has fair impact in that both $C_1$ and $C_2$ assign beneficial outcomes (positive class outcomes in this case) deterministically to 50% of men and 50% of women. However, the classifiers differ in the set of women to whom they assign the beneficial outcomes. By creating a uniform ensemble of both classifiers, we ensure instead that all women have an equal (50%) chance of the beneficial outcome, while still satisfying equality of impact.
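One way to see the difference is to compute each individual's probability of receiving the beneficial outcome under the ensemble, rather than only group-level rates. A small sketch (the function name is ours; labels are in $\{-1, 1\}$ as before):

```python
import numpy as np

def individual_benefit_probs(classifiers, probs, X):
    """For each individual (element of X), the probability of receiving the
    positive outcome under the random-selection ensemble."""
    preds = np.array([[1.0 if c(x) == 1 else 0.0 for x in X] for c in classifiers])
    return np.asarray(probs) @ preds     # shape: (number of individuals,)

# Toy check in a Figure-4-like setup: two classifiers that disagree on every
# woman, mixed uniformly, give each woman a 0.5 chance of the positive outcome.
women = ["w1", "w2"]
c1 = lambda x: 1 if x == "w1" else -1
c2 = lambda x: 1 if x == "w2" else -1
print(individual_benefit_probs([c1, c2], [0.5, 0.5], women))   # [0.5 0.5]
```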

This suggests an interesting question: Are the outcomes of the ensemble classifier more fair than those of the individual classifiers ($C_1$ and $C_2$) that comprise it?

While all these classifiers satisfy the equality of impact fairness constraint, one could make the case that the ensemble is more fair, as it offers all women an equal chance at getting beneficial outcomes, whereas $C_1$ and $C_2$ pre-determine the subset of women who will get the beneficial outcomes.

To our knowledge, no existing measure of algorithmic fairness captures this notion of evenly distributing beneficial outcomes across all members of an attribute group. Rather, existing fairness measures focus on fair assignment of outcomes between sensitive groups (inter-group fairness), while largely ignoring fair assignment of outcomes within a sensitive group (intra-group fairness).

These observations suggest the need in future work for new notions of distributional fairness to characterize the benefits achievable with diverse classifier ensembles.

3 Discussion

We have begun to explore the properties of using a random ensemble of classifiers in fair decision making, focusing on randomly selecting one classifier from a diverse set. It will be interesting in future work to explore a broader set of ensemble methods. Fish et al. [8] examined fairness when constructing a deterministic classifier using boosting, but we are not aware of prior work in fairness which considers how randomness in ensembles may be helpful.

We note a similarity to a Bayesian perspective: rather than aiming for the one true classifier, we work with a probability distribution over possible classifiers. An interesting question for future work is how to update the distribution over classifiers as more data becomes available, noting that we may want to maintain diversity [10].

Decision making systems consisting of just one classifier make it easier for users to game the system. In contrast, in an ensemble scheme such as the one we consider, where a classifier is randomly selected, an individual who aims to guarantee a high probability of a favorable classification must first acquire knowledge about the whole set of classifiers and the probability distribution over them, and then attain features some distance beyond the expected decision boundary of the ensemble (‘a fence around the law’).

A common notion of fairness is that individuals with similar features should obtain similar outcomes. However, a single deterministic classifier boundary causes individuals who fall just on either side of it to obtain completely different outcomes. Using instead a distribution over boundaries leads to a smoother, more robust profile of expected outcomes, highlighting another useful property of ensembles in the context of fair classification.
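As a toy illustration of this smoothing effect (entirely our own construction: a single feature and a Gaussian distribution over threshold classifiers), the ensemble's expected outcome changes gradually with the feature, whereas any single threshold classifier is a step function:

```python
import numpy as np

# A single threshold classifier is a step function of the feature x; averaging
# over a distribution of thresholds (here N(0, 0.5)) yields a smooth
# expected-outcome curve, so near-identical individuals receive near-identical
# expected outcomes.
thresholds = np.random.default_rng(0).normal(loc=0.0, scale=0.5, size=1000)
xs = np.linspace(-2, 2, 9)
expected_outcome = [(x > thresholds).mean() for x in xs]   # P(positive | x)
print(np.round(expected_outcome, 2))
```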

Acknowledgements

AW acknowledges support by the Alan Turing Institute under EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI.

References

  • [1] How are Cases Assigned and Scheduled in the Land Court? http://www.mass.gov/courts/court-info/trial-court/lc/lc-single-judge-case-assgn-gen.html.
  • [2] Order for Assignment of Cases. http://www.mnd.uscourts.gov/cmecf/Order-for-Assignment-of-Cases.pdf.
  • [3] J. Angwin, J. Larson, S. Mattu, and L. Kirchner. Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, 2016.
  • [4] S. Barocas and A. D. Selbst. Big Data’s Disparate Impact. California Law Review, 2016.
  • [5] G. Brown, J. Wyatt, R. Harris, and X. Yao. Diversity Creation Methods: A Survey and Categorisation. Information Fusion, 2005.
  • [6] A. Chouldechova. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. arXiv:1610.07524, 2016.
  • [7] M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. Certifying and Removing Disparate Impact. In KDD, 2015.
  • [8] B. Fish, J. Kun, and A. D. Lelkes. A Confidence-Based Approach for Balancing Fairness and Accuracy. In SDM, 2016.
  • [9] M. Hardt, E. Price, and N. Srebro. Equality of Opportunity in Supervised Learning. In NIPS, 2016.
  • [10] H.-C. Kim and Z. Ghahramani. Bayesian Classifier Combination. In AISTATS, 2012.
  • [11] J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. In ITCS, 2017.
  • [12] B. T. Luong, S. Ruggieri, and F. Turini. kNN as an Implementation of Situation Testing for Discrimination Discovery and Prevention. In KDD, 2011.
  • [13] J. Niklas, K. Sztandar-Sztanderska, and K. Szymielewicz. Profiling the Unemployed in Poland: Social and Political Implications of Algorithmic Decision Making. https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf, 2015.
  • [14] P. Welinder, S. Branson, S. J. Belongie, and P. Perona. The Multidimensional Wisdom of Crowds. In NIPS, volume 23, pages 2424–2432, 2010.
  • [15] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. In WWW, 2017.
  • [16] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Fairness Constraints: Mechanisms for Fair Classification. In AISTATS, 2017.
  • [17] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning Fair Representations. In ICML, 2013.