Machine learning techniques are increasingly applied to important societal decisions, such as criminal risk assessment, school admissions, hiring, loan sanctioning, etc. Given the impact and sensitivity of such predictions, there is warranted concern about implicit discriminatory traits exhibited by these techniques. Such discrimination may be detrimental to population subgroups of a specific race, gender, or ethnicity, and may even be illegal under certain circumstances [angwin2016machine]. These concerns have spurred vast research in the area of algorithmic fairness [corbett2018measure, dressel2018accuracy, chouldechova2018frontiers, friedler2018comparative, zhang2018achieving, berk2017fairness, kleinberg2018discrimination, barocas2016big, chouldechova2017fair, romei2014multidisciplinary]. Most of these papers aim to establish fairness notions for a group of individuals (differentiated by their race, gender, etc.), which are classified as group fairness notions.
A possible, and less studied, cause of unfairness in predictions involves distributional changes (or drift) between the training and test datasets. Disparities can be introduced when the sub-populations evolve differently over time [barocas2017fairness]. There are important real-world scenarios where a type of distributional change, called prior probability shift, occurs. Informally, a prior probability shift occurs when the fraction of positively labeled instances differs between the training and the test datasets (see Section 2.1 for a formal definition). A concrete example is the COMPAS dataset [url:compas], which contains demographic information and criminal history of defendants, and records whether they recommitted a crime within a certain period of time (positive labels are given to the re-offenders, while others have negative labels). We observe that, among the valid records screened in the year , the fractions of Caucasian and African-American re-offenders were and , respectively. However, in , these fractions were and , respectively. This indicates that the extent of prior probability shift differs between Caucasian and African-American defendants across the records of and .
If such distributional changes are unaccounted for, a classifier may end up being unfair towards the population subgroups which exhibit prior probability shifts. For example, if the rate of recidivism among a particular sensitive group reduces drastically, then a classifier trained with a higher rate of recidivism can create extreme unfairness towards individuals of that sub-population. In this work, we address these concerns and propose a method to obtain fair predictions under prior probability shifts.
Related Work. A large body of work defines various group fairness notions and provides algorithms to mitigate unfairness. Among these, Proportional Equality (PE) [biswas2019fairness, hunter2000proportional] appears to be the most appropriate fairness notion for addressing prior probability shifts among population subgroups (see Section 2.2 for definition). However, the existing results stop short of providing a procedure to ensure PE-fair predictions. We address this concern by proposing an end-to-end solution.
Apart from PE, there are other group fairness notions, none of which address prior probability shifts explicitly, such as Disparate Impact [feldman2015certifying, zafar2017afairness, kamiran2012data, calders2009building], Statistical Parity [corbett2017algorithmic, kamishima2012fairness, zemel2013learning],
Equalized Odds [hardt2016equality, kleinberg2017inherent, woodworth2017learning], and Disparate Mistreatment [zafar2017bfairness].
Unfortunately, all these fairness constraints are often non-convex, making the optimization problem (maximizing accuracy subject to fairness constraints) difficult to solve efficiently. Several papers provide convex surrogates of the non-convex constraints [goh2016satisfying, zafar2017afairness], find near-optimal, near-feasible solutions [cotter2018training, celis2018classification], or propose techniques to reduce the dependence of the predictions on group information [kamiran2012data, kamiran2012decision, pleiss2017fairness, zhang2018mitigating] (note that group fairness notions require the test set to be of statistically significant size for fairness evaluation). However, most of these solutions assume that the training and test datasets are drawn identically and independently from some common population distribution, and thus suffer in the presence of prior probability shifts (we provide empirical evidence in Section 4).
Our Contributions. To the best of our knowledge, we are the first to propose an end-to-end solution to ensure fair predictions in the presence of prior probability shifts.
In Section 3, we design our system (Combinatorial Algorithm for Proportional Equality).
We introduce a metric called Prevalence Difference (PD), which our system attempts to minimize in order to ensure PE-fairness. We theoretically establish that the PD metric exhibits several desirable properties (Theorems 1 and 2); in particular, we show that maximizing the accuracy of any subgroup is not at odds with minimizing PD. This metric also provides insight into why the system's predictions are fair (Theorem 3). We discuss these in Sections 3.1 and 3.2.
We perform a thorough evaluation of our system on synthetic and real-world datasets, and compare it with several other fair classifiers. In Section 4, we provide empirical evidence that it provides PE-fair predictions, while performing well on other fairness metrics.
2 Background and Notations
In this paper, we focus on the binary classification problem under prior probability shifts. Let f : X → Y be the prediction function, defined in some hypothesis space H, where X is the d-dimensional feature space and Y = {0, 1} is the label space. The goal of a classification problem is to learn the function f which minimizes a target loss function, say, the misclassification error (variables x and y denote feature vectors and labels). However, if these predictions are used for societal decision making, it becomes crucial to ensure a low misclassification error not only on average but also within each group defined by sensitive attribute values such as race, gender, ethnicity, etc. Blindly dropping these sensitive attributes from the dataset may not be enough to alleviate discrimination, since some non-sensitive features can be closely correlated with the sensitive attributes [zliobaite2016using, corbett2018measure, hardt2016equality]. Hence, most existing solutions assume access to the sensitive attributes. In the presence of such a sensitive attribute with multiple sub-populations, the goal is to learn f satisfying certain group-fairness criteria. We use the variable Z to denote group membership (one can encode multiple sensitive attributes into Z). We assume that the training dataset is drawn from an unknown joint distribution over X × Y × Z. The performance of the classifier is measured using a new set of data, referred to as the test dataset T, by observing how accurate and fair the predicted labels are with respect to the true labels.
Next, we focus on an important phenomenon called prior probability shift, which may cause a learned classifier to be unfair in its predictions on a test dataset.
2.1 Prior Probability Shift
Prior probability shift [saerens2002adjusting, moreno2012unifying, kull2014patterns] occurs when the prior class-probability changes between the training and test sets, but the class-conditional probability remains unaltered. Such changes, within a sub-population, occur in many real-world scenarios; that is, P(X | Y = y) remains constant but P(Y = y) changes between the training and test datasets. If left unaccounted for, this may lead to unfair predictions [barocas2017fairness].
2.2 Proportional Equality
To address the fairness concern under prior probability shifts, a notion called proportional equality (PE) was formalized in [biswas2019fairness]. A classifier is said to be PE-fair if it has low values for the following expression:
True prevalence, p_z(D), is the fraction of the population from group z that is labeled positive in the dataset D.
Prediction prevalence, p̂_z(D), is the fraction of the population from group z that is predicted positive by the classifier for D.
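As a concrete reading of these two definitions, a minimal sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def true_prevalence(y, z, group):
    """Fraction of instances in `group` whose true label is positive
    (the paper's Eq. 1; names are our own)."""
    mask = (z == group)
    return float(np.mean(y[mask] == 1))

def prediction_prevalence(y_hat, z, group):
    """Fraction of instances in `group` that the classifier predicts
    positive (the paper's Eq. 2)."""
    mask = (z == group)
    return float(np.mean(y_hat[mask] == 1))
```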
However, Biswas and Mukherjee [biswas2019fairness] do not provide any algorithm for ensuring PE-fair predictions. Any such algorithm must deal with the following key challenges:
PE (for a small threshold) is a non-convex constraint. Thus, it is hard to directly optimize for accuracy subject to this constraint for all groups.
The definition of PE uses the true prevalences of the test dataset, which are unavailable to the classifier during the prediction phase. Thus, an algorithm needs to estimate these prevalences. Techniques from the quantification literature, which we describe next, can be leveraged to address this concern.
2.3 Quantification Problem
Quantification learning (or prevalence estimation) is a supervised learning problem, introduced by Forman [forman2005counting]. It aims to predict an aggregated quantity for a set of instances. The goal is to learn a function, called a quantifier, that outputs an estimate of the true prevalence of a finite, non-empty, and unlabeled test set. As highlighted by Forman, quantification is not a by-product of classification [gonzalez2017quantification]. In fact, unlike the assumptions made in classification, quantification techniques account for changes in prior probabilities within subgroups, while assuming that the class-conditional distributions P(X | Y) remain the same over the training and test datasets. This allows quantifiers to perform better than naïve classify-and-count techniques, as demonstrated by Forman [forman2006quantifying].
Some commonly used algorithms to construct quantifiers are Adjusted Classify and Count (ACC) [forman2006quantifying], Scaled Probability Average (SPA) [bella2010quantification], and HDy [gonzalez2013class]. These algorithms can be used to estimate the prevalence of a group in the test set.
For ease of exposition, we describe a simple quantification technique, ACC. This method learns a binary classifier from the training set and estimates its true positive rate (tpr) and false positive rate (fpr) via cross-validation. Using this trained model, the algorithm counts the number of cases on which the classifier outputs positive on the test set. Finally, the true fraction of positives (true prevalence) is estimated via the equation

p = (p̂ − fpr) / (tpr − fpr),

where p̂ denotes the fraction of predicted positives. The use of tpr and fpr from the training set is justified by the assumption that P(X | Y) remains the same in the training and test datasets. This simple algorithm turns out to provide good estimates of prevalences under prior probability shifts. However, for our experiments, we use SPA [bella2010quantification], which uses a probability estimator instead of a classifier and turns out to be more robust to variations when estimating probabilities of a dataset with few samples.
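To make the adjustment concrete, here is a minimal sketch of ACC; the choice of classifier (logistic regression), the fold count, and the clipping to [0, 1] are our own assumptions, not prescribed by the method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def acc_quantify(X_train, y_train, X_test):
    """Adjusted Classify and Count: p = (p_hat - fpr) / (tpr - fpr).

    tpr and fpr are estimated on the training set via cross-validated
    predictions; the adjustment is valid when P(X | Y) is unchanged
    between training and test."""
    clf = LogisticRegression(max_iter=1000)
    cv_pred = cross_val_predict(clf, X_train, y_train, cv=5)
    tpr = np.mean(cv_pred[y_train == 1] == 1)
    fpr = np.mean(cv_pred[y_train == 0] == 1)
    clf.fit(X_train, y_train)
    p_hat = np.mean(clf.predict(X_test) == 1)   # naive classify-and-count
    p = (p_hat - fpr) / max(tpr - fpr, 1e-12)   # adjusted estimate
    return float(np.clip(p, 0.0, 1.0))
```

Even when the raw classify-and-count rate p̂ is badly biased under a prior shift, the adjusted estimate recovers the test prevalence closely.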
Next, we discuss our system, which provides a comprehensive solution to the above problems by combining quantification techniques with the training of an ensemble of classifiers.
In this section, we introduce our system (Combinatorial Algorithm for Proportional Equality) for ensuring PE-fair predictions. It takes as input a training dataset and a vector of desired prevalences, and is trained separately for each group, since we hypothesize that the relationship between the non-sensitive features and the outcome variable may differ across groups. Thus, each group is best served by classifiers trained on datasets obtained from the corresponding group (training a separate classifier for a small-sized subgroup may be inappropriate; for the datasets we consider, this issue never arises). Such decoupled classifiers are also considered by Dwork et al. [dwork2017decoupled], but they do not handle prior probability shifts.
The training phase outputs, for each group z, the following:
a set of classifiers, each trained using a sampling of the training dataset obtained by the sampling module, which takes as input a prevalence parameter p and a training set with n data points. It randomly selects, with replacement, p·n instances with positive labels and (1−p)·n instances with negative labels, thus outputting a sample of size n. Each classifier is thereby specialized in providing accurate predictions on datasets with a particular prevalence.
a quantifier, generated by the quantification module, which is subsequently used in the prediction phase to estimate the true prevalence of the test dataset. Separate quantifiers are created for each group since the extent of prior probability shift may differ across groups.
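The sampling step described above can be sketched as follows, assuming the module draws p·n positives and (1−p)·n negatives with replacement (function and parameter names are ours):

```python
import numpy as np

def sample_at_prevalence(X, y, p, rng=None):
    """Resample n training points with replacement so that a fraction p
    of the sample is positive. Sampling within each class leaves the
    class-conditional distribution P(X | Y) untouched, so the sample
    differs from the original only in its prior. (Sketch; the exact
    rounding behaviour is our assumption.)"""
    rng = np.random.default_rng(rng)
    n = len(y)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    n_pos = int(round(p * n))
    idx = np.concatenate([
        rng.choice(pos, size=n_pos, replace=True),
        rng.choice(neg, size=n - n_pos, replace=True),
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]
```

Calling this with a grid of prevalence values yields the ensemble of specialized classifiers used in the training phase.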
During the prediction phase, for each group, an estimate of the prevalence of the test data is obtained using the quantifier learned in the training phase. This estimate is then used to choose the classifier that minimizes the prevalence difference metric (Section 3.1). Finally, the system outputs the predictions of the chosen classifier on the test set.
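The selection rule in the prediction phase can be sketched as follows, with the quantifier's estimate standing in for the unavailable true prevalence (all names are ours):

```python
import numpy as np

def select_classifier(classifiers, X_test, p_est):
    """Pick the ensemble member whose prediction prevalence on the test
    set is closest to the quantifier's estimate p_est of the true
    prevalence, i.e., the member minimizing the estimated prevalence
    difference. `classifiers` maps a training prevalence to a fitted
    model exposing .predict (sketch of ours)."""
    best, best_pd = None, float("inf")
    for p_train, clf in classifiers.items():
        y_hat = clf.predict(X_test)
        pd_est = abs(p_est - float(np.mean(y_hat)))
        if pd_est < best_pd:
            best, best_pd = clf, pd_est
    return best, best_pd
```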
Note that our system provides the flexibility to plug any classification and quantification algorithm into its two modules. Key to the system is the prevalence difference metric, used in the prediction phase. We formalize the metric and discuss some of its properties in the next section.
3.1 Prevalence Difference Metric
We define the prevalence difference (PD) metric, for each group z, as PD_z(D) = |p_z(D) − p̂_z(D)|, where p_z(D) and p̂_z(D) denote the true and predicted prevalences of the dataset D (as defined in Equations 1 and 2, respectively). Hereafter, we drop the subscripts and superscripts on these quantities whenever we refer to the population in aggregate.
Note that the true prevalence of the test set cannot be used during the prediction phase. Thus, replacing the true prevalence with the quantifier's estimate in the definition of PD provides a measure for choosing the best classifier for each group. Also, unlike PD, other performance metrics like accuracy, FPR, or FNR are not suitable for choosing the best classifier, since these metrics require the true labels of the test dataset. We use the PD metric for: (1) choosing the best classifier in the prediction phase, and (2) measuring the performance of the predictions, since a high value of PD_z implies an inability to account for the prior probability shift of group z.
The PD metric is somewhat different from fairness metrics that aim to capture parity between two sub-populations. Such fairness metrics may require sacrificing performance on one group to maintain parity with the other. PD, by contrast, permits the two groups to be treated differently, since each group may have gone through a different change of prior probabilities. A high PD_z indicates a high extent of harm caused by the predictions towards group z. Thus, to audit the impact of a classifier's predictions on a group, it is important to evaluate PD_z along with the accuracy, FNR, and FPR values within each group.
Next, we show that a perfect (100% accurate) classifier attains zero prevalence difference. Additionally, we show that a classifier with high accuracy on any subgroup also attains a very low PD for that subgroup. Empirically, we observe that low PD results in PE-fair predictions.
3.2 Theoretical Guarantees
We first show a simple result: a classifier whose predictions are exactly the ground truth attains PD = 0, thereby satisfying our proposed metric used for selecting the best classifier. Note that a perfect classifier may not satisfy fairness notions such as disparate impact and statistical parity.
A perfect classifier always exhibits PD = 0.
Let us consider a perfect classifier whose predictions are equal to the ground truth, i.e., ŷ_i = y_i for every instance i, where ŷ_i is the label predicted by the classifier for instance i. Thus, for each group z, the true prevalence p_z is equal to the prediction prevalence p̂_z, according to the definitions in Equations 1 and 2. Thus, the prevalence difference PD_z = 0. ∎
If the overall accuracy of a classifier is (1 − ε), where ε is a very small number, then the overall prevalence difference is PD = |FN − FP| / n, where FN and FP denote the numbers of false negatives and false positives, respectively, in the test dataset with n instances. This further implies that PD ≤ ε.
Let ŷ denote the predictions of a classifier on a test dataset T with n instances. Some other notation that we use for the proof:
TP = number of instances with y_i = 1 and ŷ_i = 1 (true positives).
TN = number of instances with y_i = 0 and ŷ_i = 0 (true negatives).
FP = number of instances with y_i = 0 and ŷ_i = 1 (false positives).
FN = number of instances with y_i = 1 and ŷ_i = 0 (false negatives).
Note that TP + TN + FP + FN = n. Let p = (TP + FN)/n and p̂ = (TP + FP)/n be the true and prediction prevalences. Then, the prevalence difference can be written as:

PD = |p − p̂| = |FN − FP| / n.
Let the accuracy of the classifier on the test dataset be (1 − ε), where 0 ≤ ε ≤ 1. Then, (TP + TN)/n = 1 − ε, i.e., FP + FN = εn.
Without loss of generality, let us assume FN ≥ FP. Thus, Equation 4 can be written as: PD = (FN − FP)/n ≤ (FN + FP)/n = ε.
Similarly, assuming FP > FN, we obtain PD = (FP − FN)/n ≤ ε. ∎
If the accuracy of a classifier on any sub-population is greater than (1 − ε), then the prevalence difference of that sub-population is less than ε.
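The identity PD = |FN − FP| / n and the accuracy bound above can be sanity-checked numerically; this sketch is ours, not the paper's code:

```python
import numpy as np

def pd_from_counts(tp, tn, fp, fn):
    """Prevalence difference via confusion counts:
    |p - p_hat| = |(TP + FN)/n - (TP + FP)/n| = |FN - FP| / n."""
    n = tp + tn + fp + fn
    return abs(int(fn) - int(fp)) / n

# Theorem 2 check on random confusion matrices:
# with accuracy 1 - eps, PD never exceeds eps.
rng = np.random.default_rng(0)
for _ in range(1000):
    tp, tn, fp, fn = (int(v) for v in rng.integers(1, 100, size=4))
    eps = (fp + fn) / (tp + tn + fp + fn)  # 1 - accuracy
    assert pd_from_counts(tp, tn, fp, fn) <= eps
```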
The following theorem gives insight into why our system works. In the subsequent discussion, we drop the parameter T from the prevalence notations, since we exclusively refer to these values in the context of the test dataset.
Let where and . For a group , and test dataset , let the quantifier be such that , and the classifiers be such that for all , for small and . Then, for the best classifier
the following holds:
For the best classifier, the prevalence difference of a group can be upper bounded using the triangle inequality:
Inequality (8) is implied by the assumption on the quantifier’s performance, i.e., . To provide an upper bound for , we pick such that
Since , it is at most away from one of the fractional values in . Therefore,
We use Inequality (9) to provide an upper bound to the expression , using case-by-case analysis.
Case : Assume . This leaves us with three possibilities for the value of :
Assume . Then,
Assume . Now, we bound the desired quantity using the value of . Note that since is the best classifier. Thus, either or .
Assume . Then,
Assume . Then,
Assume . Now, we bound the desired quantity using the value of , and there can be three cases.
Assume . Then,
Assume . Then,
Assume . Then,
Inequalities - establish the following upper bound when ,
4 Experimental Evaluation
We first evaluate our system on synthetically generated datasets. We then compare it with other fair classifiers on the real-world COMPAS [url:compas] and MEPS [url:meps] datasets, where we observe possible prior probability shifts.
Our system is open source, but the link is retracted for anonymity. Its performance on a wide range of fairness metrics, across all these datasets, reinforces our proposal that it should be used for predictions under prior probability shifts.
Synthetic: We assume a generative model with the following features: a sensitive attribute, two additional attributes, and the label. We assume that the overall population distribution is generated as . We further consider equal representation of the two population subgroups, i.e., == for each . The two additional attributes are conditionally independent: , and their distributions are considered to be Gaussian with the following parameters: =, =, =, and =.
We generate instances for the training dataset with equal label distribution, i.e., . However, while generating the test sets, the prevalence parameters are different. We generate different types of test datasets, each obtained by varying the prevalences of both subgroups, such that .
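The generation scheme can be sketched as follows; the means, scale, and sample sizes below are placeholders of ours, not the parameter values used in the paper:

```python
import numpy as np

def gen_subgroup(n, prevalence, rng, mu_pos=1.0, mu_neg=-1.0):
    """Draw n instances for one subgroup: labels are Bernoulli(prevalence)
    and the two non-sensitive attributes are conditionally independent
    Gaussians whose mean depends on the label. Varying `prevalence`
    between training and test induces a prior probability shift while
    keeping P(X | Y) fixed."""
    y = (rng.random(n) < prevalence).astype(int)
    means = np.where(y[:, None] == 1, mu_pos, mu_neg)  # shape (n, 1)
    X = rng.normal(loc=means, scale=1.0, size=(n, 2))
    return X, y

rng = np.random.default_rng(0)
X_train, y_train = gen_subgroup(5000, 0.5, rng)  # balanced training labels
X_test, y_test = gen_subgroup(5000, 0.2, rng)    # shifted test prevalence
```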
The COMPAS dataset contains demographic information and criminal history for pre-trial defendants in Broward County, Florida. The learning goal is to predict whether an individual re-offends. We consider re-offense as the label and race as the sensitive attribute (= denotes African-Americans, while = denotes Caucasians). We pre-processed the dataset to remove rows containing missing or invalid information. Our training dataset comprises records whose screening dates were in the year (of which are African-American), while the test dataset comprises records screened in the year (of which are African-American).
MEPS comprises surveys carried out on individuals, health care professionals, and employers in the United States. The feature of interest measures the total number of trips made to avail some sort of medical facility; the classification task involves predicting whether this count exceeds a certain value. We consider race as the sensitive attribute (= denotes 'Non-Whites'). The surveys for the year form our training set (with data points, of which are 'Non-White'), and the surveys for form our test set (with data points, of which are 'Non-White').
4.2 Other Algorithms for Comparison
We compare against an accuracy-maximizing classifier, Max_Acc, which uses the same algorithm used by our system in its classification module. On the real-world datasets, we additionally compare with the following (in-, pre-, and post-processing) fair algorithms, implemented in the IBM AI Fairness 360 [aif360-oct-2018] toolkit: Reweighing (Reweigh) [kamiran2012data], Adversarial Debiasing (AD) [zhang2018mitigating], and variants of Meta_fair [celis2018classification], Calibrated Equalized Odds Postprocessing (CEOP) [pleiss2017fairness], and Reject Option Classification (ROC) [kamiran2012decision]. These algorithms target fairness notions other than PE. We evaluate the extent to which they achieve PE fairness and compare how they perform on a set of other metrics (such as FPR-diff, FNR-diff, Accuracy-diff, and PD). While our system can handle multiple sensitive attributes, we choose one sensitive attribute for all the datasets to stay consistent with the implementations in the IBM AIF360 toolkit.
4.3 Parameters and Modules used for
Prevalences: We set .
: As described in Section 3.
: Scaled Probability Average [bella2010quantification].
Synthetic dataset: We evaluated with types of test datasets, each with for . The general trend we observe is that our system outperforms Max_Acc whenever there is a significant shift in prior probabilities. We report two interesting sets of results here.
First, we consider test datasets with , and ranging between and . Figure 4 summarizes our findings. Since our system accounts for prevalence changes, its accuracy for group = (Figure 1) is consistently higher than that of Max_Acc, except for the dataset with , where the accuracies become nearly equal. The prevalence difference for = (Figure 2) is lower for our system whenever there is a prior probability shift (i.e., when ). In fact, for our system, it remains across all the test datasets, whereas for Max_Acc it increases linearly as moves away from . Lastly, the predictions of our system consistently exhibit a lower valuation for PE (Figure 3) compared to Max_Acc. This highlights that its predictions are more fair than those of the purely accuracy-maximizing Max_Acc.
Second, in Table 1, we report results for scenarios where both and significantly deviate from their corresponding prevalences in the training set. The results are representative of the general trend we observed in the other test datasets: our system outperforms Max_Acc on the accuracy, PD, and PE metrics.
Real-world datasets: For COMPAS, Table 2 highlights that the true prevalences of the training and test datasets are significantly different, which is indicative of a possible prior probability shift. The table also shows that the quantification module of our system makes a good estimate of the true prevalences of the test dataset.
Table 3: for each algorithm, per-group results (Z = 0, Z = 1, and their difference) for each metric, along with per-group PD and PE.
For MEPS, we observe a shift only for the group =, between the training set (surveys in the year ) and the test set (surveys in ). Since the differences in prevalences are rather small, this dataset is of interest: it allows us to investigate the performance of our system when the extent of prior probability shift is small. Though the prevalences estimated by the quantifier seem similar to those of the training set, they differ from the prevalences of the test dataset by only and , for = and = respectively, and are thus good estimates.
Table 3 summarizes the results on the COMPAS and MEPS datasets for our system's variants and the other fair algorithms described in Section 4.2. Due to lack of space, we elaborate upon the results for the COMPAS dataset only.
The first variant considers the whole test dataset during prediction, while the second considers individual instances during prediction (similar to what the other algorithms do). We expect the former to perform better, since the quantification module is expected to perform better on larger test datasets.
The first variant outperforms Max_Acc on PD and all the other fairness metrics (FPR-diff, FNR-diff, Accuracy-diff, and PE). The prediction prevalences of Max_Acc ( and ) are close to the true prevalences of the training set ( and ), which highlights the inability of Max_Acc to account for the prior probability shift. One critical observation about the first variant is that FPR-diff= and FNR-diff=, which implies that its predictions exhibit equalized odds. In comparison, these differences for Max_Acc are and . In fact, for Max_Acc, one group's FPR is almost twice the other's, whereas its FNR is almost half of the other's. This implies that Max_Acc imposes unfairly high risks of recidivism on African-American defendants, while Caucasian defendants are predicted to have lower risks than they actually do.
The true prevalences of the two subgroups in the test dataset are close to each other (namely, and ). Thus, a classifier aiming to achieve statistical parity is expected to do well on PE. We observe this in Table 3, where -spd (statistical parity difference) has the lowest PE. However, its false positive rates are more than for both subgroups, which is unfair and harmful to both subgroups. This unfairness is also captured by the high PD values of -spd. We observe that PD is the lowest for our system among all the classifiers. For , -fdr (false discovery rate, with the group-fairness trade-off parameter set to ) is the only other fair classifier with a lower value. However, the predictions of -fdr have high false positive rates and low accuracies.
Note that a trivial classifier which always predicts positive labels will have FNR-diff=, FPR-diff=, and Accuracy-diff=. However, this classifier will have a high PD for both groups (= and =), which indicates a substantial skew between the false positives and false negatives. Thus, PD is an important metric that, in addition to accuracy, captures the learning ability of the classifiers.
We make a final observation on our experimental results. Since both COMPAS and MEPS are real-world datasets, the distributional changes highlighted in Table 2 may not be due to prior probability shifts alone. Although our system is designed to handle only prior probability shifts, the good performance of both its variants on a wide range of metrics for these real-world datasets shows the robustness of our approach.
A possible extension of this work is to handle other distributional changes, such as concept drift, that is, when P(Y | X) changes but P(X) remains the same.
Arpita Biswas gratefully acknowledges the support of a Google PhD Fellowship Award.