To reduce human error and prejudice, many high-stakes decisions have been turned over to machine algorithms. However, recent research suggests that this does not remove discrimination, and can perpetuate harmful stereotypes. While algorithms have been developed to improve fairness, they typically face at least one of three shortcomings: they are not interpretable, they lose significant accuracy compared to unbiased equivalents, or they are not transferable across models. To address these issues, we propose a geometric method that removes correlations between data and any number of protected variables. Further, we can control the strength of debiasing through an adjustable parameter to address the trade-off between model accuracy and fairness. The resulting features are interpretable and can be used with many popular models, such as linear regression, random forest and multilayer perceptrons. The resulting predictions are found to be more accurate and fair than several comparable fair AI algorithms across a variety of benchmark datasets. Our work shows that debiasing data is a simple and effective solution toward improving fairness.
Machine learning (ML) models sift through mountains of data to make decisions on matters big and small: e.g., who should be shown a product recommendation, hired for a job, or given a home loan. Machine inference can systematize decision processes to take into account orders of magnitude more information, produce accurate decisions, and avoid the common pitfalls of human judgment, such as prejudice and blind spots. Moreover, unlike people, machines will never make poor decisions when tired [Danziger, Levav, and Avnaim-Pesso2011], pressed for time or distracted by other matters [Shah, Mullainathan, and Shafir2012, Mani et al.2013].
Recent research suggests, however, that discrimination remains pervasive, even in ML models [Angwin et al.2016, Chouldechova2017, Dressel and Farid2018, O’neil2016]. For example, a model used to evaluate criminal defendants for recidivism assigned higher risk scores to African Americans than to Caucasians [Angwin et al.2016]. As a result, reformed African American defendants, who would never commit another crime, were deemed by the model to present a higher risk to society—as much as twice as high [Angwin et al.2016, Dressel and Farid2018]—as reformed white defendants, with potentially grave consequences on how they were treated by the justice system.
The emerging field of AI fairness has produced approaches for mitigating harmful model biases [Dwork et al.2012, Chouldechova2017, Chouldechova and Roth2018], such as penalizing unfair inferences for particular models [Dwork et al.2012, Berk et al.2017], or creating representations that do not strongly depend on protected features [Jaiswal et al.2018, Moyer et al.2018, Locatello et al.2019]. These methods, however, lack at least one of three critical attributes: interpretability, accuracy, or generalizability. Interpretability is necessary for understanding the social factors and individual features contributing to discrimination and bias, as well as for improving the transparency and accountability of AI systems. In contrast to black box models, fair models need to be able to explain their decisions. In terms of performance, although models must sacrifice some accuracy for fairness [Pierson et al.2017], the trade-off need not be as dramatic as what current methods achieve. The issue of generalizability stems from the specialization of current methods to specific ML models: these methods cannot easily generalize to other models. For example, while [Zafar et al.2017, Kamiran, Calders, and Pechenizkiy2010, Berk et al.2017]
all create different methods for fair ML, each method is specialized to regressions, support vector machines (SVM), or random forests. Similarly, while previous methods create fair latent features for neural networks (NN) [Jaiswal et al.2018, Moyer et al.2018], these methods cannot be easily applied to improve fairness in non-NN models. These fair AI algorithms were not meant to be generalizable, because there do not seem to be adequate meta-algorithms that can debias a whole host of ML models. One might naively expect that we can simply create a single fair model and apply it to all datasets. The problem is that model performance varies greatly across datasets. While NNs are critical for, e.g., image recognition [Ciregan, Meier, and Schmidhuber2012], other methods perform better for small data [Olson, Wyner, and Berk2018], especially when the number of dimensions is high and the sample size is low [Liu, Wei, and Qiang Yang2017]. There is no one-size-fits-all model, and there is no one-size-fits-all debiasing method. Is there an easier way to create fairer predictions than building specialized methods for specialized ML models? Chen et al. offer some clues to addressing this fundamental issue in fair AI [Chen, Johansson, and Sontag2018]: by addressing data biases, we can potentially improve fair AI across the spectrum of models and achieve fairness without greatly sacrificing prediction quality.
Following the ideas of Chen et al., we therefore develop a geometric method for debiasing features. Depending on the hyperparameter we choose, these features are mathematically guaranteed to be uncorrelated with specified sensitive, or protected, features. This method is exceedingly fast, and the debiased features are highly correlated with the original features (average Pearson correlations are between 0.993–0.994 across the three datasets studied in this paper). These features are as interpretable as the original features when applied to any model. When applied to linear regression, for example, the coefficients are the same or similar to the coefficients of the original features when controlling for protected variables (see Methods). These debiased features serve as a fair representation of the data that can be used with a number of ML models, such as linear regression, random forest, SVMs, and NNs. Due to the small size of the benchmark data, we do not use our features to train NNs in this paper, because NNs could easily overfit the data. While previous methods have created fair representations [Olfat and Aswani2018, Samadi et al.2018, Jaiswal et al.2018, Moyer et al.2018], these representations are either not interpretable, like PCA components, or the relationship between the fair representations and the original features has not been established. We evaluate the proposed approach on a number of now-benchmark datasets and show that models using these debiased features are more accurate for almost any level of fairness we desire.
In the rest of the paper, we first review recent advances in fair AI to highlight the novelty of our method. Next, we describe in the Methods section our methodology to improve data fairness, and the definitions of fairness we use in the paper. In Results, we describe how our method improves fairness in both synthetic data and empirical benchmark data. We compare to several competing methods and demonstrate the advantages of our method. Finally, we summarize our results and discuss future work in the Conclusion section.
Social scientists use linear regression for data analysis due to its simplicity and interpretability. Interpretability comes from regression coefficients, which specify how the outcome, or response, changes when features change by one unit. However, regression creates unfair outcomes, even when protected features are excluded from the model, because other features may be correlated with them.
To make regression models fair, researchers introduced a loss function that penalizes regression for unfair outcomes [Berk et al.2017]. Similarly, [Zafar et al.2015] created fair logistic regression by introducing fairness constraints that limit the covariance between protected features and the outcome. An alternate method achieved fairness by constraining false positive or false negative rates [Zafar et al.2016]. These works have several issues. First, protected features are not included in the logistic model with fairness constraints. While this improves privacy, it forces the parameters of the logistic model to take combinations that minimize the correlation with the protected features, which can reduce accuracy when the constraints are strict. The issue with the second method is mainly numerical: the algorithm requires optimizing a convex loss function over a non-convex parameter space. While these models are generally interpretable, the approaches do not transfer to other models, and their accuracy often suffers in comparison to neural methods.
Researchers have explored a variety of methods for learning fair representations of data [Jaiswal et al.2018, Moyer et al.2018, Louizos et al.2015, Xie et al.2017, Zemel et al.2013, Samadi et al.2018, Olfat and Aswani2018]. Some of these works use NNs to embed raw features in a lower-dimensional space, such that the embedding contains information about the outcome variable but, at the same time, little information about the protected feature. Fair logistic models, or fair scoring, can in turn be regarded as a one-dimensional embedding of the data that ensures the predictions, ŷ, are independent of the protected features. These embedding methods are mainly used with NNs, which, while highly accurate, often lack interpretability. Two methods were instead developed to improve the fairness of PCA features [Samadi et al.2018, Olfat and Aswani2018]; while they can be applied to non-NN ML models, they lack interpretability compared to the original features.
[Johndrow2017] proposed an algorithm that removes sensitive information about protected groups based on inverse transform sampling. The algorithm transforms individual features such that the transformed features satisfy the marginal distribution. Although this method can guarantee that predictions are fair in a probabilistic sense, it has a critical disadvantage: as the number of protected features grows, the number of protected groups grows exponentially. This means that in order to properly estimate the conditional and marginal distributions of features, one needs an exponentially increasing population size. Our method overcomes these difficulties by using linear algebra as the basis for learning unbiased representations, which allows our algorithm to debias data in time linear in the size of the data. Moreover, our method is a white box: it is interpretable and can be fully scrutinized, unlike a black box method.
We describe a geometric method for constructing fair, interpretable representations. These representations can be used with a variety of ML methods to create fairer and more accurate models of data.
We consider tabular data with n entries and d features. The features are vectors in the n-dimensional space, denoted x_i, where i = 1, …, d, and one of the columns corresponds to the outcome, or target, variable y. Among the features, there are also k protected features, z_1, …, z_k. As a pre-processing step, all features are centered around the mean: x_i ← x_i − μ_{x_i}.
We describe a procedure to debias the data so as to create linearly fair features. We aim to construct a representation x' of a feature x that is uncorrelated with the protected columns z_1, …, z_k, but highly correlated with the original feature x. We recall that the Pearson correlation between the representation x' and any feature v is defined as

r(x', v) = E[(x' − μ_{x'})(v − μ_v)] / (σ_{x'} σ_v),

where E is the expectation, μ denotes the mean, and σ the standard deviation. Because all the features are centered (and we also assume that x' is centered), μ_{x'} = μ_v = 0, and we have

r(x', v) = (x' · v) / (‖x'‖ ‖v‖).

Zero correlation between x' and the protected columns requires that x' lives in the solution space of z_j · x' = 0 for j = 1, …, k. Maximizing the correlation between x' and x under this constraint is equivalent to projecting x into the solution space of z_j · x' = 0.

To calculate x', we can first create an orthonormal basis of the vectors z_1, …, z_k, which we can label u_1, …, u_k. We then construct the projector P = I − Σ_j u_j u_j^T. The representation is given as x' = P x.
Using the Gram–Schmidt process, the orthonormal basis can be constructed in O(nk²) time, and the projection of each feature takes O(nk) time. Given d features, the total time of the algorithm is O(nk² + nkd). Our method therefore scales linearly with respect to the size of the data and the number of features. In practice, this is exceedingly fast: the algorithm takes only 90 milliseconds to run on the Adult dataset described below, which has 20K rows and over 100 features.
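As a concrete sketch, the projection above can be written in a few lines of NumPy. The function name is ours, and we use a QR decomposition in place of explicit Gram–Schmidt (the two are equivalent for building the orthonormal basis):

```python
import numpy as np

def debias_features(X, Z):
    """Remove linear correlations between each column of X (n x d)
    and the protected columns Z (n x k) by orthogonal projection.
    Columns are mean-centered first, matching the paper's setup."""
    Xc = X - X.mean(axis=0)
    Zc = Z - Z.mean(axis=0)
    # Orthonormal basis u_1, ..., u_k of the protected subspace.
    U, _ = np.linalg.qr(Zc)
    # x' = (I - sum_j u_j u_j^T) x for every feature column.
    return Xc - U @ (U.T @ Xc)

# Example: debiased columns have (near-)zero correlation with Z.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))
X = Z @ rng.normal(size=(2, 3)) + rng.normal(size=(500, 3))
X_fair = debias_features(X, Z)
corr = np.corrcoef(np.hstack([X_fair, Z]), rowvar=False)[:3, 3:]
```

The residual correlations in `corr` are at the level of floating-point noise.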
While the previous discussion focused on creating linearly fair features, one can also make linearly fair outcome variables through the same process. In prediction tasks, however, we do not have access to the outcome data. While our method does not guarantee that every model's estimate of the outcome variable, ŷ, is fair, we find that it can significantly improve fairness compared to competing methods. Moreover, in the special case of linear regression, it can be shown that the resulting estimate, ŷ, is uncorrelated with the protected variables.
Inevitably, the accuracy of a model using such linearly fair features will drop compared to using the original features, because the solution is more constrained. To address this issue, we introduce a parameter λ ∈ [0, 1], which indicates the fairness level. We define the parameterized latent variable as

x'(λ) = λ x' + (1 − λ) x.

Here, λ = 1 corresponds to x'(1) = x', which is strictly orthogonal to the protected features z_j, while λ = 0 gives x'(0) = x, the original feature.
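A minimal sketch of this interpolation, assuming the linear blend between the original and fully debiased feature described above (the parameter name `lam` and the single-feature projection are our own illustration):

```python
import numpy as np

def partial_debias(x, x_fair, lam):
    """Blend the original feature (lam = 0) with its fully
    debiased version (lam = 1)."""
    return lam * x_fair + (1 - lam) * x

rng = np.random.default_rng(1)
z = rng.normal(size=1000)                       # single protected feature
x = z + rng.normal(size=1000)                   # feature correlated with z
z_c, x_c = z - z.mean(), x - x.mean()
x_fair = x_c - (x_c @ z_c) / (z_c @ z_c) * z_c  # project out z
corrs = [abs(np.corrcoef(partial_debias(x_c, x_fair, lam), z)[0, 1])
         for lam in (0.0, 0.5, 1.0)]
# correlation with z shrinks as lam grows, reaching ~0 at lam = 1
```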
The protected features can be both real valued and cardinal. The fair representation method can also handle categorical protected features by introducing dummy variables. Specifically, if a variable has m categories c_1, …, c_m, we can convert it to m − 1 binary variables, where the j-th variable is 1 if the variable takes category c_j, and 0 otherwise. If all m − 1 variables are 0, then the category is c_m. As a simple example, if a feature has 3 categories, A, B, and C, then the dummy variables would be z_A and z_B: if z_A = 1, the category is A; if z_B = 1, the category is B; and otherwise it is C. The condition of fairness in this case is interpreted as equal mean values of the latent variables across the categorical groups.
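The m-to-(m − 1) dummy encoding can be sketched as follows (the function and category names are illustrative):

```python
import numpy as np

def dummy_encode(categories, values):
    """Encode m categories as m-1 binary columns; the last
    category is the all-zeros reference level."""
    levels = categories[:-1]
    return np.array([[1 if v == c else 0 for c in levels] for v in values])

# 3 categories A, B, C -> two dummies (z_A, z_B); C is (0, 0).
codes = dummy_encode(["A", "B", "C"], ["A", "C", "B"])
```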
Using the procedure described above, we can construct a fair representation of every feature and use the fair features to model the outcome variable. Consider a linear regression model that includes all features, both the protected features z_j and the non-protected features x_i:

y = Σ_i w_i x_i + Σ_j v_j z_j + b.

After transforming the features x_i to fair features x'_i, the fair regression model reduces to:

y = Σ_i w'_i x'_i + Σ_j v'_j z_j + b'.

We can prove that w'_i = w_i, but the predicted value ŷ is uncorrelated with the protected features z_j. For generalized linear models, such as logistic regression, this proof does not hold, but we numerically find that the coefficients are similar.
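The claim that ŷ is uncorrelated with the protected features can be checked numerically. This sketch fits plain least squares on debiased synthetic features (all names and parameter values are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.normal(size=(n, 1))                          # protected feature
X = z @ rng.normal(size=(1, 4)) + rng.normal(size=(n, 4))
y = X @ rng.normal(size=4) + rng.normal(size=n)

# Debias: project the centered features off the protected direction.
Zc = z - z.mean(axis=0)
U, _ = np.linalg.qr(Zc)
Xc = X - X.mean(axis=0)
Xf = Xc - U @ (U.T @ Xc)

# Ordinary least squares on the fair features.
w, *_ = np.linalg.lstsq(Xf, y - y.mean(), rcond=None)
y_hat = Xf @ w

# Predictions lie in the span of Xf, which is orthogonal to Zc,
# so their correlation with z vanishes up to float error.
corr = abs(np.corrcoef(y_hat, z.ravel())[0, 1])
```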
We should take a step back at this point. The fair latent features are close approximations of the original features; we therefore expect, and in certain cases can prove, that the regression coefficients of the fair features are approximately the coefficients of the original features. The fair features can, by this definition, be considered almost as interpretable as the original features.
While there exists no consensus for measuring fairness, researchers have proposed a variety of metrics, some focusing on representations and some on the predicted outcomes [Verma and Rubin2018, Hutchinson and Mitchell2019]. We will therefore compare our method to competing methods using the following metrics: Pearson correlation, mutual information, discrimination, calibration, balance of classes, and accuracy of the inferred protected features. Due to space limitations, we leave mutual information out of our analysis in this paper, and do not compare calibration and balance of classes to model accuracy. Results in all cases are similar.
One can argue that outcomes are fair if they do not depend on the protected features. If this is the case, a malicious adversary will not be able to guess the protected features from the model's predictions. One way to quantify the dependence is through the Pearson correlation between (real valued or cardinal) predictions and protected features. For models making binary predictions, fairness can be measured using the mutual information between predictions and the protected features, given that protected features are discrete. We find that mutual information and Pearson correlation yield similar findings, and therefore focus on Pearson correlation in this paper. Previous work [Zemel et al.2013] has also defined a discrimination metric for binary predictions. Consider a binary protected variable z and a binary prediction ŷ of an outcome y. The metric measures the bias of the prediction with respect to the protected feature as the difference of positive rates between the two groups:

D = |P(ŷ = 1 | z = 1) − P(ŷ = 1 | z = 0)|.
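A direct implementation of this difference-of-positive-rates metric (the function name is ours):

```python
import numpy as np

def discrimination(y_pred, z):
    """|P(y_pred = 1 | z = 1) - P(y_pred = 1 | z = 0)|
    for binary prediction and binary protected arrays."""
    y_pred, z = np.asarray(y_pred), np.asarray(z)
    return abs(y_pred[z == 1].mean() - y_pred[z == 0].mean())

# Group z=1 has positive rate 2/3, group z=0 has 1/3 -> D = 1/3.
d = discrimination([1, 1, 0, 1, 0, 0], [1, 1, 1, 0, 0, 0])
```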
For real valued predictions (ŷ ∈ [0, 1]), [Kleinberg2016] suggested more nuanced ways to measure fairness:

Calibration within groups: Individuals assigned predicted probability p should have an approximate positive rate of p. This should hold within both protected groups (z = 0 and z = 1).

Balance for the negative class: The mean prediction for negative-class individuals should be the same in group z = 0 and group z = 1.

Balance for the positive class: The mean prediction for positive-class individuals should be the same in group z = 0 and group z = 1.
In some cases, calibration error is difficult to calculate, as it depends on how predictions are binned. In these cases, we can measure calibration error using the log-likelihood of the labels given the real valued predictions as a proxy. By definition, logistic regression maximizes the (log-)likelihood function, assuming the observations are sampled from independent Bernoulli distributions with P(y_i = 1) = ŷ_i. A better log-likelihood implies that individuals assigned probability p are more likely to have a positive rate of p, which is better calibrated according to [Kleinberg2016].
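The log-likelihood proxy can be computed directly; this sketch (names and example probabilities are ours) shows that a calibrated set of predictions scores higher than a miscalibrated one:

```python
import numpy as np

def bernoulli_log_likelihood(y, p, eps=1e-12):
    """Mean log-likelihood of binary labels y under predicted
    probabilities p, assuming independent Bernoulli outcomes."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    y = np.asarray(y, dtype=float)
    return float(np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

y = np.array([1, 0, 1, 0])
ll_good = bernoulli_log_likelihood(y, [0.7, 0.3, 0.7, 0.3])  # calibrated
ll_bad = bernoulli_log_likelihood(y, [0.9, 0.6, 0.9, 0.6])   # miscalibrated
```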
Several past studies examined the fairness of representations, arguing that models using fair representations will also make fair predictions. Learned representations are considered fair if they do not reveal any information about the protected features [Jaiswal et al.2018, Moyer et al.2018, Louizos et al.2015, Xie et al.2017, Verma and Rubin2018]. The studies trained a discriminator to predict protected features from the learned representations—using accuracy as a measure of fairness.
Following this approach, we treat the predicted probabilities as a 1-dimensional representation of the data and use the accuracy of the inferred protected features as a measure of fairness. However, this method is not effective in situations where the protected classes are unbalanced. Let us assume the fair representation is x' and the protected feature is z. For simplicity, we only consider the case of a single binary protected feature. The discriminator infers the protected feature in a Bayesian way, namely,

P(z | x') ∝ P(x' | z) P(z).

In the case where there is a large difference between the priors P(z = 1) and P(z = 0), even if there is useful information in the conditional distribution P(x' | z), the discriminator will not perform significantly better than the baseline model, the majority class classifier.
In this section, we demonstrate how our method can achieve fair classification using synthetic data, and then compare our prediction accuracy and fairness to other fair AI algorithms using benchmark datasets.
We create synthetic biased data using the procedure described in [Zafar et al.2016]. We generate data with one binary protected variable z, one binary outcome y, and two continuous features, x_1 and x_2, which are bivariate-Gaussian distributed within each value of z. In Fig. 1, we use color to represent the protected feature values (red, blue) and symbols to represent the outcome (o, x). The first observation is that there is an imbalance in the joint distribution of the protected feature and the outcome variable: among the blue markers, one outcome symbol is noticeably over-represented. We expect that a logistic classifier trained on this data will show similarly unbalanced behavior. To demonstrate our method, we choose two different fairness levels: λ = 1 (full debiasing) and λ = 0 (equivalent to raw data). We first transform the two features into their corresponding fair representations and then train logistic classifiers using these fair representations. In Fig. 1, we plot the data using the fair representations and show the classification boundary as a green dashed line. We observe that for λ = 1 the blue and red markers are mixed (less discrimination and bias), while for λ = 0 the blue and red markers tend to separate from each other. We can estimate this imbalance by comparing the ratio of blue markers among individuals predicted as o and among individuals predicted as x; the larger the difference, the greater the imbalance. Quantitatively, for λ = 1, 62.7% of o-predictions are blue versus 52.9% of x-predictions. For λ = 0, those ratios are 76.2% and 36.5%. The accuracy of outcome predictions is 0.811 and 0.870 for the fair and original features, respectively, demonstrating that, while increasing fairness does sacrifice accuracy, the loss can be relatively small. Overall, the results suggest that biased data creates biased models, but our method can make models fairer.
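A generation procedure of this flavor can be sketched as follows. The exact means, covariances, and mixing rates of [Zafar et al.2016] are not reproduced here; the values below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Binary outcome, and a protected attribute correlated with it
# (the 0.7 / 0.3 rates are illustrative, not the paper's values).
y = rng.integers(0, 2, size=n)
z = (rng.random(n) < np.where(y == 1, 0.7, 0.3)).astype(int)

# Two continuous features: bivariate Gaussian within each value of z,
# with illustrative means and identity covariance.
means = {0: np.array([-1.0, -1.0]), 1: np.array([1.0, 1.0])}
X = np.array([rng.multivariate_normal(means[zi], np.eye(2)) for zi in z])
```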
We compare our fair logistic model to state-of-the-art methods on three real-world datasets, which have become benchmarks for fair AI.
German dataset has 61 features about 1,000 individuals, with a binary outcome variable denoting whether an individual has a good credit score or not. The protected feature is gender. (https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data))
COMPAS dataset contains data about 6,172 defendants. The binary outcome variable denotes whether the defendant will recidivate (commit a crime) within two years. The protected feature is race (whether the race is African American or not), and there are nine features in total. (https://github.com/propublica/compas-analysis)
Adult dataset contains data about 45,222 individuals. The outcome variable is binary, denoting whether an individual earns more than $50,000 per year. The protected feature is age, and there are 104 features in total. (https://archive.ics.uci.edu/ml/datasets/Adult)
Debiased features had mean correlations with the original features of 0.993 (90% quantiles: 0.954–0.999), 0.994 (90% quantiles: 0.980–0.999), and 0.994 (90% quantiles: 0.948–1.000) for the German, COMPAS, and Adult data, respectively. We reserved a portion of the data in the Adult and COMPAS datasets for testing and used the remaining data to perform 5-fold cross validation. This ensured no leakage of information from the training set to the testing set. The German dataset is much smaller than the rest, so it was randomly divided into five folds of training, validation, and testing sets, containing 50%, 20%, and 30% of the data, respectively. We measured the performance metrics on the test data.
We varied the fairness parameter between 0 and 1 and applied the debiased features to logistic regression, AdaBoost, NuSVM, random forest, and multilayer perceptrons. In practice, one could use a host of commercial ML models and pick the most accurate one given their fairness tolerance.
We compared our method to several previous fair AI algorithms. For the models proposed by [Zafar et al.2015, Zafar et al.2016], we vary the fairness constraints from perfect fairness to unconstrained. For the "Unified Adversarial Invariance" (UAI) model proposed by [Jaiswal et al.2018], we vary the fairness term in the loss function from 0 (no fairness) to a very large value (corresponding to perfect fairness). The predictions of the UAI model for the German and Adult datasets were provided by the authors. We are interested in (1) how different models trade off accuracy and fairness and (2) how different metrics of fairness compare to each other.
We first investigate the trade-offs between prediction accuracy (Acc Y) and fairness, which we measure in three different ways: (1) the Pearson correlation between the protected feature and the model predictions, (2) the discrimination between the binary protected feature and the binarized predictions (predicted probabilities above 1/2 are given a value of 1, and are otherwise 0), and (3) the accuracy of predicting protected features from the predictions (Acc P). To robustly predict the protected features from the model predictions, we used both a NN with three hidden layers, as in prior work [Jaiswal et al.2018, Moyer et al.2018, Louizos et al.2015, Xie et al.2017, Zemel et al.2013], and a random forest model, and report the better accuracy of the two. Figures 2, 3, and 4 show the resulting comparisons.
The figures show that models using the proposed fair features achieve significantly higher accuracy, for the same degree of fairness, compared to competing methods. In Fig. 4, we find that Acc P shows little difference from the baseline majority class classifier for the German and Adult datasets; the reason is explained by Eq. (6). On the other hand, Acc P for the COMPAS dataset shows a clear trend because the majority baseline is around 0.51, which is also consistent with Eq. (6). For the Adult dataset, fair logistic regression cannot achieve perfect fairness, but the situation improves with AdaBoost. In other words, no single ML model achieves the greatest accuracy at every level of fairness, but our method allows us to choose a suitable model to achieve greater accuracy.
Method | Acc Y | Acc P | Acc Y | Acc P
Li [Li2014]* | 0.74 | 0.80 | 0.76 | 0.67
VFAE [Louizos2015]* | 0.73 | 0.70 | 0.81 | 0.67
Xie [Xie2017]* | 0.74 | 0.80 | 0.84 | 0.67
Moyer [Moyer2018]* | 0.74 | 0.60 | 0.79 | 0.69
Jaiswal [Jaiswal2018]* | 0.78 | 0.80 | 0.84 | 0.67
We further compared our method with previous works on fair representations. As mentioned before, some previous works use NNs to encode the features into a high dimensional embedding space and then use separately trained discriminators to infer the protected feature and the outcome variable; the accuracy of inferring each is reported. Ideally, the accuracy for the outcome should be high and the accuracy of inferring the protected features should be close to the majority class baseline. We set the fairness level to its maximum, i.e., perfect fairness, when comparing to previous works. We show Acc P and Acc Y for various methods in Table 1 and Fig. 4. Our method applied to a logistic model has similar fairness to the best existing methods but is very fast, easy to understand, and creates more interpretable features.
Finally, we use another measure of fairness that captures the degree to which each model makes mistakes. Figure 5 shows delta score (i.e., balance) versus negative log-likelihood (i.e., calibration error). Fairer predictions are located in the lower left hand corner of each figure, meaning that there are fewer differences in outcomes for the different classes. We only compare the logistic model with fair features to the models proposed by Zafar et al. [Zafar et al.2015, Zafar et al.2016], because these models maximize the log-likelihood function (minimize calibration error) when selecting parameters. For all datasets, our method generally achieves greater fairness.
We show that our algorithm simultaneously achieves three advances over many previous fair AI algorithms. First, it is interpretable; the features we construct are minimally affected by our fair transform. While this does not mean the models trained on these features are interpretable (they could be a black box), it does mean that any method used to interpret features could easily be used for these fairer features as well. Next, the features preserve model accuracy. Namely, models using these features were more accurate than competing methods when the value of the fairness metric was held fixed. This is in part due to the third principle: that our method can be applied to any number of commercial models; it merely acts as a pre-processing step. Different models have different strengths and weaknesses; while some are more accurate, others are fairer. We can pick and choose particular models that achieve both high fairness and accuracy, whether it is a linear model like logistic regression or a non-linear model like a multilayer perceptron, as shown in Figs. 2, 3, & 4.
There are two main directions for future work. First, we only remove linear correlations between each feature and the protected features; while this works very well in practice and beats state-of-the-art models, fairness could be further improved by removing non-linear correlations. Second, our method could more directly address categorical protected variables. In the present method, a categorical variable with m categories becomes a set of m − 1 binary variables. It would be ideal, however, if a method reduced the mutual information with the categorical variable directly, rather than first creating dummy variables and removing correlations.
The authors would like to thank Ayush Jaiswal for providing the code for learning adversarial models and feedback on results. The authors also thank Daniel Moyer and Greg Ver Steeg for insightful discussions about the approach. This material is based upon work supported in part by the Defense Advanced Research Projects Agency (DARPA) under Contract No. W911NF-18-C-0011. This research is also based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract 2017-17071900005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
Here are the hyperparameters used for modeling the empirical datasets. All models were trained using the sklearn library in Python 3.
German:
  Adaboost: logit base model, 100 estimators (max)
  MLP: 64-node, 3-layer network
  Random Forest: 100 trees, limited max depth
  NuSVM: radial basis function kernel

COMPAS:
  Adaboost: logit base model, 100 estimators (max)
  MLP: 64-node, 3-layer network
  Random Forest: 100 trees, limited max depth
  NuSVM: radial basis function kernel
  UAI: predictor, decoder, and disentangler loss weights

Adult:
  Adaboost: logit base model, 100 estimators (max)
  MLP: 64-node, 3-layer network
  Random Forest: 100 trees, limited max depth
  NuSVM: radial basis function kernel
Kamiran, Calders, and Pechenizkiy. 2010. Discrimination Aware Decision Tree Learning. In 2010 IEEE International Conference on Data Mining, 869–874.
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), 2287–2293.
Louizos et al. 2015. The Variational Fair Autoencoder. 1–11.
Olfat and Aswani. Convex Formulations for Fair Principal Component Analysis.