Fairness through Equality of Effort

11/11/2019
by   Wen Huang, et al.
University of Arkansas

Fair machine learning has been receiving increasing attention in the machine learning community. Researchers in fair learning have developed correlation- or association-based measures such as demographic disparity, mistreatment disparity, and calibration; causal-based measures such as total effect, direct and indirect discrimination, and counterfactual fairness; and fairness notions such as equality of opportunity and equalized odds that consider both the decisions in the training data and the decisions made by predictive models. In this paper, we develop a new causal-based fairness notion, called equality of effort. Different from existing fairness notions, which mainly focus on discovering the disparity of decisions between two groups of individuals, the proposed equality of effort helps answer questions like to what extent a legitimate variable should change to make a particular individual achieve a certain outcome level, and addresses the concern of whether the efforts made to achieve the same outcome level by individuals from the protected group and by those from the unprotected group are different. We develop algorithms for determining whether an individual or a group of individuals is discriminated against in terms of equality of effort. We also develop an optimization-based method for removing discriminatory effects from the data if discrimination is detected. We conduct empirical evaluations to compare equality of effort with existing fairness notions and show the effectiveness of our proposed algorithms.


Introduction

Fair machine learning has been receiving increasing attention in the machine learning community. Discrimination is unfair treatment towards individuals based on the group to which they are perceived to belong. The first endeavor of the research community to achieve fairness was the development of correlation- or association-based measures, including demographic disparity (e.g., risk difference), mistreatment disparity, calibration, etc. [Romei and Ruggieri2014, Luong, Ruggieri, and Turini2011, Žliobaite, Kamiran, and Calders2011, Dwork et al.2012, Feldman et al.2015], which mainly focus on discovering the disparity of certain statistical metrics between two groups of individuals. However, as increasingly recognized in recent work [Zhang, Wu, and Wu2017b, Kilbertus et al.2017, Nabi and Shpitser2018], unlawful discrimination is a causal connection between the challenged decision and a protected characteristic, which cannot be captured by simple correlation or association concepts. To address this limitation, causal-based fairness measures have been proposed, including total effect [Zhang and Bareinboim2018b], direct and indirect discrimination [Zhang, Wu, and Wu2017b, Zhang and Bareinboim2018b, Chiappa and Gillam2019], and counterfactual fairness [Kusner et al.2017, Russell et al.2017]. Fairness notions have also been extended to consider both the decisions in the training data and the decisions made by predictive models, such as equality of opportunity and equalized odds [Hardt et al.2016, Zafar et al.2017], and counterfactual direct and indirect error rates [Zhang and Bareinboim2018a].

In this paper, we develop a new causal-based fairness notion, called equality of effort. Consider a dataset with individuals described by attributes (S, T, X, Y), where S denotes a protected attribute such as gender with domain values {s^+, s^-}, Y denotes a decision attribute such as loan with domain values {y^+, y^-}, T denotes a legitimate attribute such as credit score, and X denotes a set of covariates. A particular applicant i in the dataset with profile (s_i, t_i, x_i, y_i) may ask the counterfactual question: by how much should she improve her credit score so that the probability of her loan application being approved is above a threshold γ? Informally speaking, our proposed equality of effort notion addresses her concern about whether her future effort (the increase of her credit score) would differ from that of male applicants with a similar profile (t_i, x_i).

Following Rubin's causal modeling notation, we use Y_i(t) to represent the potential outcome for individual i given a new treatment t, and E[Y_i(t)] to denote the individual-level expectation of the outcome variable. If E[Y_i(t)] ≥ γ, we say applicant i tends to receive loan approval with at least probability γ. We can then calculate or estimate the minimum value of the treatment variable needed to achieve the γ-level outcome for individual i. If the minimum value for individual i is significantly higher than that of her counterparts (i.e., males with similar characteristics), discrimination exists in terms of effort discrepancy.

Our fairness notion, equality of effort, is different from existing fairness notions, e.g., statistical disparity and path-specific effects, which mainly focus on the effect of the sensitive attribute S on the decision attribute Y. Our proposed equality of effort instead focuses on to what extent the treatment variable T should change to make an individual achieve a certain outcome level. This notion addresses the concern of whether the efforts that would be needed to achieve the same outcome level differ between individuals from the protected group and those from the unprotected group. We develop algorithms for determining whether an individual or a group of individuals is discriminated against in terms of equality of effort, based on three widely used techniques for causal inference: outcome regression, propensity score weighting, and structural causal modeling. We also develop an optimization-based method for removing discriminatory effects from biased datasets. We conduct empirical evaluations to compare equality of effort with existing fairness notions, and the evaluation results show the effectiveness of our proposed algorithms.

Preliminaries

Notations

In this paper, an uppercase letter denotes a variable, e.g., T; a bold uppercase letter denotes a set of variables, e.g., X; a lowercase letter denotes a value or a set of values of the variables, e.g., t and x; and a lowercase letter with a superscript denotes a particular value, e.g., s^+ and y^-.

Potential Outcomes Framework

The potential outcomes framework, also known as the Neyman-Rubin potential outcomes framework or the Rubin causal model, has been widely used in many research areas to perform causal inference. It refers to the outcomes one would see under each treatment option. Let Y be the outcome variable, T be the binary or multi-valued ordinal treatment variable, and X be the pre-treatment variables (covariates). Y_i(t) represents the potential outcome for individual i given treatment level t, and E[Y_i(t)] denotes the individual-level expectation of the outcome variable. The "fundamental problem of causal inference" states that one can never observe all the potential outcomes for any individual [Holland1986], so we need to compare potential outcomes and make inferences from observed data. We use E[Y(t)] to denote the population-level expectation of the outcome variable and E[Y_D(t)] to denote the conditional expectation of the outcome variable within a certain sub-population D.

Traditional causal inference focuses on estimating the potential outcomes and treatment effects given the information of the treatment variable and the pre-treatment variables [Burgette, Griffin, and McCaffrey2017]. For example, the average treatment effect, ATE = E[Y(t_1)] − E[Y(t_0)], answers the question of how, on average, the outcome of interest would change if everyone in the population of interest had been assigned to a particular treatment t_1 relative to if they had all received another treatment t_0. The average treatment effect on the treated, ATT = E[Y(t_1) | T = t_1] − E[Y(t_0) | T = t_1], is about how the average outcome would change if everyone who received one particular treatment t_1 had instead received another treatment t_0. Under the potential outcomes framework, the outcome function usually takes one of two forms: the regression form or the probability factorization form. Under certain assumptions, the outcome function can be inverted and the corresponding inverse outcome function derived.

Propensity Score Method

One major challenge in causal inference is the presence of confounding variables. A confounder is a covariate that affects the treatment variable and the outcome variable simultaneously. Under the unconfoundedness assumption (no hidden confounders), the propensity score method, a widely used approach for causal inference from observational data, can reduce the selection bias caused by confounders.

Definition 1 (Propensity Score).

For a binary treatment variable T, the propensity score is the conditional probability of receiving the treatment given the pre-treatment variables X, i.e., e(x) = P(T = 1 | X = x).

The estimation of propensity scores requires the model or functional form of e(x) and the variables to include in X. Let e_i denote the propensity score for individual i. For binary-valued groups, the propensity score is typically estimated by logistic regression:

e_i = 1 / (1 + exp(−(β_0 + β_1 x_{i1} + … + β_k x_{ik}))),

where x_{i1}, …, x_{ik} are the values of the selected covariates and β_0, …, β_k are regression coefficients.

If correctly estimated, the reciprocal of the propensity score can be used as a weight for each individual such that the covariate distributions of the group under treatment 1 and the group under treatment 0 become identical. In other words, treated individuals receive weight w_i = 1/ê_i and control individuals receive weight w_i = 1/(1 − ê_i), where ê_i is the estimate of the propensity score for individual i. [Rosenbaum and Rubin1983] showed that, conditional on the propensity score, all observed covariates are independent of the treatment assignment, and they will not confound the estimated treatment effects. Hence, after the weighting procedure, a pseudo-balanced population can be built in which the imbalance caused by measured covariates between the treatment groups has been eliminated. The average potential outcomes can thus be estimated by standard estimators. For example, one unbiased estimator of the population-level E[Y(1)] can be written as Ê[Y(1)] = (1/n) Σ_i w_i T_i Y_i, where w_i = 1/ê_i and n is the number of individuals.
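As a concrete illustration, the following is a minimal sketch (not the authors' implementation; the synthetic data and variable names are ours) of estimating propensity scores with logistic regression and forming the inverse-probability-weighted estimates of the average potential outcomes:

```python
# A minimal sketch of propensity-score estimation and inverse-probability
# weighting for a binary treatment, using scikit-learn on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                                   # pre-treatment covariates
p_true = 1 / (1 + np.exp(-(0.5 * X[:, 0] - X[:, 1])))
T = rng.binomial(1, p_true)                                   # treatment depends on X
Y = (rng.uniform(size=n) < 0.2 + 0.3 * T + 0.1 * X[:, 0]).astype(float)  # outcome

# Estimate propensity scores e_i = P(T = 1 | X = x_i) by logistic regression.
e_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# Inverse-propensity-weighted estimates of the average potential outcomes.
ey1 = np.mean(T * Y / e_hat)               # estimate of E[Y(1)]
ey0 = np.mean((1 - T) * Y / (1 - e_hat))   # estimate of E[Y(0)]
print("ATE estimate:", ey1 - ey0)
```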

Fairness through Equal Effort

We assume a population with attributes (S, T, X, Y), where S denotes a protected attribute with domain values {s^+, s^-}, Y denotes a decision attribute with domain values {y^+, y^-}, T denotes a legitimate attribute, and X denotes a set of covariates. Without loss of generality, we assume there is one binary protected attribute, one binary decision attribute, and one ordered multi-category legitimate attribute. In this paper, we simply use the change of T as the effort needed to achieve a certain level of outcome and do not consider the real monetary or resource cost behind that change.

Equality of Effort at the Individual Level

For an individual i in the dataset with profile (s_i, t_i, x_i, y_i), we want to determine the minimal change in the treatment variable T needed to achieve a certain outcome level, based on observational data. If the minimal change for individual i does not differ from that of her counterparts (individuals with similar profiles except for the sensitive attribute), we say individual i achieves fairness in terms of equality of effort.

Formally, we use Y_i(t) to represent the potential outcome for individual i given a new or counterfactual treatment t. We use E[Y_i(t)] to denote the individual-level expectation of the outcome variable, where E is the expectation operator from probability theory. When E[Y_i(t)] is larger than a predefined threshold γ, we say individual i would receive a positive decision with probability at least γ.

Definition 2 (γ-Minimum Effort).

For individual i with treatment value t_i, the minimum value of the treatment variable to achieve a γ-level outcome is defined as

Ψ_i(γ) = min { t : E[Y_i(t)] ≥ γ },

and the minimum effort to achieve the γ-level outcome is Ψ_i(γ) − t_i.

However, E[Y_i(t)] cannot be directly observed, and we have to derive its estimate from samples with similar characteristics. We design an estimation procedure based on the idea of situation testing, which is a common practice for determining whether an individual is discriminated against. How to select variables for finding similar individuals has been studied in situation-testing-based individual discrimination discovery [Zhang, Wu, and Wu2016]. The idea proposed there is to first construct a causal graph over all variables and then select the variables that are parents of the decision. Their work is also applicable to our equal effort definition. We first find a subset of users, denoted as G, each of whom has the same (or similar) characteristics (t and x) as individual i. We denote by G^+ (G^-) the subgroup of users in G with the sensitive attribute value s^+ (s^-). Similarly, E[Y_{G^+}(t)] denotes the expected outcome under treatment t for the subgroup G^+. The minimal effort needed to achieve a γ-level outcome within the subgroup G^+ is then defined as Ψ_{G^+}(γ) = min { t : E[Y_{G^+}(t)] ≥ γ }.

Definition 3 (γ-Equal Effort Fairness at the Individual Level).

For a certain outcome level γ, equality of effort holds for individual i if

Ψ_{G^+}(γ) = Ψ_{G^-}(γ).

The difference Ψ_{G^-}(γ) − Ψ_{G^+}(γ) measures the effort discrepancy at the individual level.
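As an illustration, the following minimal sketch (with made-up expected-outcome values; not the authors' code) computes the γ-minimum treatment value and the individual-level effort discrepancy from already-estimated expected outcomes over a discrete treatment domain:

```python
# Compute the gamma-minimum treatment value and the individual-level effort
# discrepancy from estimated expected outcomes E[Y_G(t)] (values are made up).
def minimum_effort_value(expected_outcome, gamma):
    """Return the smallest treatment value t with expected_outcome[t] >= gamma."""
    feasible = [t for t, ey in sorted(expected_outcome.items()) if ey >= gamma]
    return feasible[0] if feasible else None          # None: gamma not reachable

# Estimated E[Y(t)] for the two subgroups of individuals similar to the target.
ey_g_plus  = {0: 0.05, 1: 0.15, 2: 0.45, 3: 0.72, 4: 0.88}   # subgroup G+ (S = s^+)
ey_g_minus = {0: 0.03, 1: 0.08, 2: 0.25, 3: 0.48, 4: 0.71}   # subgroup G- (S = s^-)

gamma = 0.7
psi_plus = minimum_effort_value(ey_g_plus, gamma)     # -> 3
psi_minus = minimum_effort_value(ey_g_minus, gamma)   # -> 4
# Definition 3: Psi_{G-}(gamma) - Psi_{G+}(gamma); a nonzero value indicates
# an effort discrepancy for this individual.
print("effort discrepancy:", psi_minus - psi_plus)
```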

Equality of Effort at the Group or System Level

In addition to checking individual-level discrimination, we also want to check whether discrimination exists at the group or system level. System-level discrimination deals with the average discrimination across the whole system, e.g., all applicants to a university, while group-level discrimination deals with discrimination that occurs in one particular subgroup, e.g., the applicants applying for a particular major. Existing works [Žliobaite, Kamiran, and Calders2011, Zhang, Wu, and Wu2017b] apply demographic disparity metrics (e.g., risk difference) or causal effects (e.g., direct and indirect causal discrimination) on the whole dataset (a subset of the data) to determine system-level (group-level) discrimination. Similarly, we may want to check whether there are effort discrepancies at the group or system level.

We denote by D the whole dataset, and by D^+ (D^-) the subset with the sensitive attribute value s^+ (s^-). We define the minimum value of the treatment variable to achieve a certain outcome level γ for D^+ as Ψ_{D^+}(γ) = min { t : E[Y_{D^+}(t)] ≥ γ }, and similarly Ψ_{D^-}(γ) for D^-.

Definition 4 (γ-Equality of Effort at the System Level).

For a certain outcome level γ, equality of effort between the two sensitive attribute values s^+ and s^- is achieved if

Ψ_{D^+}(γ) = Ψ_{D^-}(γ).

The difference Ψ_{D^-}(γ) − Ψ_{D^+}(γ) measures the effort discrepancy at the system level.

Definition 4 can be straightforwardly adapted to the group level. Given two compared groups, their distributions over certain attributes (e.g., outstanding debt) could be different, and simply applying our group equal-effort fairness may not be appropriate. In this case, we could apply path-specific effect/mediator analysis [Zhang, Wu, and Wu2017b, Nabi and Shpitser2018] to separate and measure different causal effects, e.g., direct discrimination, indirect discrimination, and explainable effects.

Comparison with Other Fairness Metrics

Demographic parity [Verma and Rubin2018]: P(y^+ | s^+) = P(y^+ | s^-)
Conditional parity [Verma and Rubin2018]: P(y^+ | s^+, o) = P(y^+ | s^-, o)
Total causal discrimination [Zhang, Wu, and Wu2017b, Zhang and Bareinboim2018b]: E[Y(s^+)] − E[Y(s^-)]
Path-specific causal discrimination [Zhang, Wu, and Wu2017b, Nabi and Shpitser2018]: E[Y(s^+ | π)] − E[Y(s^-)], for a path set π from S to Y
Counterfactual fairness [Kusner et al.2017]: P(Y(s^+) = y | X = x, S = s) = P(Y(s^-) = y | X = x, S = s)
PC Fairness [Wu et al.2019]: P(Y(s^+ | π) = y | o) = P(Y(s^- | π) = y | o)
Equality of opportunity [Hardt et al.2016, Zafar et al.2017]: P(Ŷ = y^+ | Y = y^+, s^+) = P(Ŷ = y^+ | Y = y^+, s^-)
Calibration [Hardt et al.2016, Zafar et al.2017]: P(Y = y^+ | Ŷ = y^+, s^+) = P(Y = y^+ | Ŷ = y^+, s^-)
Table 1: Formulas of previous fairness notions.

Many different fairness metrics have been proposed to measure the fairness of data and machine learning algorithms. Classic metrics include individual fairness, demographic parity, equality of opportunity, calibration, causal fairness, and counterfactual fairness; see a recent survey [Verma and Rubin2018]. Table 1 lists the formulas of representative previous fairness metrics for comparison with our equality of effort notion. For example, demographic parity requires that P(y^+ | s^+) = P(y^+ | s^-), and similarly conditional parity requires that P(y^+ | s^+, o) = P(y^+ | s^-, o), where o denotes the values of a specified variable set O. Basically, they require that a decision be independent of the protected attribute, conditionally or unconditionally on some other variables. For causal-based fairness notions, the total causal discrimination is based on the average causal effect of S on Y and is defined as E[Y(s^+)] − E[Y(s^-)], which represents the expected change of the outcome when S of all individuals changes from s^- to s^+. Different from the total causal discrimination, which measures the causal effect transmitted along all causal paths from S to Y in the causal graph, the path-specific causal discrimination is based on the causal effect transmitted along some specific paths π from S to Y, e.g., direct causal discrimination when π is the direct path from S to Y, and indirect causal discrimination when π contains the paths from S to Y that pass through a redlining attribute. Counterfactual fairness requires P(Y(s^+) = y | X = x, S = s) = P(Y(s^-) = y | X = x, S = s), which means that a decision is fair towards an individual if it is the same in the actual world and in a counterfactual world where the individual belonged to a different demographic group. Most recently, [Wu et al.2019] developed a unified definition, path-specific counterfactual fairness (PC Fairness), that covers previous causality-based fairness notions. Different from demographic parity and causal-based fairness notions, our proposed equality of effort considers to what extent the legitimate variable T should change to achieve a certain outcome level and whether the minimum efforts required of individuals from the protected group and of those from the unprotected group are the same.

When considering discrimination from the perspective of supervised learning, equality of opportunity is based on the actual outcome Y and the predicted outcome Ŷ, requiring P(Ŷ = y^+ | Y = y^+, s^+) = P(Ŷ = y^+ | Y = y^+, s^-). Basically, it means the decision model should not mistakenly predict examples with Y = y^+ as negative at a higher rate for one group than for another. In other words, a predictor Ŷ satisfies equality of opportunity with respect to protected attribute S and outcome Y if Ŷ and S are independent conditional on Y = y^+. Similarly, calibration considers the fraction of correct positive predictions and requires P(Y = y^+ | Ŷ = y^+, s^+) = P(Y = y^+ | Ŷ = y^+, s^-). Our proposed equality of effort does not consider the model predictions and instead focuses on the effort, i.e., the minimum change of T needed to achieve a certain outcome level γ, based on the causal framework.

We note a parallel work [Heidari, Nanda, and Gummadi2019] that developed an effort-based measure of fairness and formulated effort unfairness as the inequality in the amount of effort required of members of the disadvantaged group and of the advantaged group. However, their work focused on characterizing the long-term impact of algorithmic policies on reshaping the underlying population, based on the psychological literature on social learning and the economic literature on equality of opportunity. Our work is based on counterfactual causal inference and develops an optimization-based framework for removing effort unfairness from static data when discrimination is detected.

Calculating Average Effort Discrepancy

In real-world applications, multiple outcome levels γ are often used in decision making. We use the average effort discrepancy over all values of γ as the measure of equality of effort in this scenario. If γ takes values in a discrete set, the average is computed as the mean of all effort discrepancies. If γ is a continuous variable, the average is defined as an integration over the range of γ.

Definition 5 (Average Effort Discrepancy (AED)).

If γ ∈ Γ, where Γ denotes the set of outcome levels of interest for the expectation of the outcome variable, then the average effort discrepancy is defined as

AED = (1/|Γ|) Σ_{γ ∈ Γ} ( Ψ_{D^-}(γ) − Ψ_{D^+}(γ) ).   (1)

If γ is a continuous variable in a range [γ_l, γ_u], then the average effort discrepancy is defined as

AED = ∫_{γ_l}^{γ_u} ( Ψ_{D^-}(γ) − Ψ_{D^+}(γ) ) dγ.   (2)

To calculate the AED, we need to first compute the expected outcomes E[Y_{D^+}(t)] and E[Y_{D^-}(t)], and then compute the minimum efforts. In the following, we develop a general calculation method assuming monotonicity and invertibility of E[Y_D(t)] as a function of t. Then, we consider three widely used techniques for causal inference: outcome regression and propensity score weighting from Rubin's framework, and structural causal analysis from Pearl's framework. We describe how to compute the AED with each of the techniques.

Input: Dataset D, threshold τ
Output: Result (True if discrimination in terms of equal effort is detected)

1:  For each subset D^+ and D^-, identify the expected outcome E[Y_{D^+}(t)] and E[Y_{D^-}(t)]
2:  if  E[Y_D(t)] is continuous, monotonic, and invertible  then
3:     Calculate AED according to Eq. (3)
4:  else
5:     Identify the inverse function of E[Y_D(t)]
6:     if  the inverse function has a closed form  then
7:        for each  γ ∈ Γ  do
8:           Find the minimum value of t such that E[Y_{D^+}(t)] ≥ γ (resp. E[Y_{D^-}(t)] ≥ γ)
9:           Calculate the effort discrepancy Ψ_{D^-}(γ) − Ψ_{D^+}(γ)
10:        end for
11:     else
12:        for each treatment level  t  do
13:           Use an appropriate causal inference method to estimate E[Y_{D^+}(t)] and E[Y_{D^-}(t)]
14:        end for
15:        for each  γ ∈ Γ  do
16:           Numerically find the minimum value of t such that E[Y_{D^+}(t)] ≥ γ (resp. E[Y_{D^-}(t)] ≥ γ)
17:           Calculate the effort discrepancy Ψ_{D^-}(γ) − Ψ_{D^+}(γ)
18:        end for
19:        Calculate AED following Definition 5
20:     end if
21:  end if
22:  if  AED > τ  then
23:     Result = True
24:  else
25:     Result = False
26:  end if
Algorithm 1 Discrimination detection through equal effort

Algorithm 1 shows the pseudocode of our algorithm for computing the AED and judging discrimination in terms of equal effort. Lines 2-3 deal with the situation where E[Y_D(t)] is a continuous, monotonic, and invertible function of t, in which case the AED can be directly computed through an integration over t, as given in the next subsection. If these assumptions are not satisfied, lines 6-10 handle the situation where the closed form of the inverse function can be derived, and lines 12-19 handle the situation otherwise.
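The following minimal sketch (illustrative values and threshold; not the authors' implementation) instantiates the discrete branch of Algorithm 1 (lines 12-19): given expected outcomes estimated per treatment level for the two subgroups, it inverts them numerically and averages the effort discrepancies over a set Γ of outcome levels:

```python
# Discrete branch of Algorithm 1: estimate E[Y_{D+}(t)] and E[Y_{D-}(t)] per
# treatment level, invert them numerically, and average the discrepancies.
def minimum_effort_value(expected_outcome, gamma):
    feasible = [t for t, ey in sorted(expected_outcome.items()) if ey >= gamma]
    return feasible[0] if feasible else max(expected_outcome)  # cap at the top level

def average_effort_discrepancy(ey_plus, ey_minus, gammas):
    # Definition 5, discrete case: mean over Gamma of Psi_{D-}(gamma) - Psi_{D+}(gamma).
    diffs = [minimum_effort_value(ey_minus, g) - minimum_effort_value(ey_plus, g)
             for g in gammas]
    return sum(diffs) / len(diffs)

# Expected outcomes per treatment level, e.g. estimated by weighting, regression, or SCM.
ey_plus  = {0: 0.16, 1: 0.24, 2: 0.50, 3: 0.74, 4: 0.86}   # subgroup D+ (S = s^+)
ey_minus = {0: 0.06, 1: 0.08, 2: 0.22, 3: 0.47, 4: 0.71}   # subgroup D- (S = s^-)

gammas = [0.3, 0.5, 0.7]        # the set Gamma of outcome levels
tau = 0.5                       # hypothetical decision threshold
aed = average_effort_discrepancy(ey_plus, ey_minus, gammas)
print("AED:", aed, "discrimination detected:", aed > tau)
```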

General Method under Monotonicity and Invertibility Assumption

As discussed in the previous section, E[Y_{D^+}(t)] and E[Y_{D^-}(t)] denote the expectations of the outcome variable for groups D^+ and D^-. We can treat them as functions of t, denoted as f^+(t) and f^-(t). Under the assumption that f^+ is monotonically increasing and invertible, the inequality f^+(t) ≥ γ can be expressed as t ≥ (f^+)^{-1}(γ), where (f^+)^{-1} is the inverse function of f^+. As a result, we directly obtain Ψ_{D^+}(γ) = (f^+)^{-1}(γ), and similarly Ψ_{D^-}(γ) = (f^-)^{-1}(γ).

If the closed forms of (f^+)^{-1} and (f^-)^{-1} can be derived, then the AED can be easily computed; otherwise its calculation is not straightforward. However, when γ is treated as a continuous variable, we do not need to derive the closed forms of the inverse functions to compute the AED; we only require the integrals of f^+ and f^- to be tractable. This is because, based on Laisant's theorem, we have

∫_{γ_l}^{γ_u} f^{-1}(γ) dγ = γ_u·b − γ_l·a − ∫_{a}^{b} f(t) dt,

where a = f^{-1}(γ_l) and b = f^{-1}(γ_u). In practice, a and b can be estimated using numerical methods. As a result, the AED is given by

AED = [ γ_u·b^- − γ_l·a^- − ∫_{a^-}^{b^-} f^-(t) dt ] − [ γ_u·b^+ − γ_l·a^+ − ∫_{a^+}^{b^+} f^+(t) dt ],   (3)

where a^± = (f^±)^{-1}(γ_l) and b^± = (f^±)^{-1}(γ_u).
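The following small numerical check (with made-up monotone outcome curves; not from the paper) verifies the inverse-function identity behind Eq. (3) and shows that the AED computed from the integrals of f^+ and f^- matches the direct integral of the inverse functions:

```python
# Numerical check of the identity behind Eq. (3): for a monotone, invertible f,
#   int_{gl}^{gu} f^{-1}(g) dg = gu*b - gl*a - int_a^b f(t) dt,
# with a = f^{-1}(gl) and b = f^{-1}(gu).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f_plus(t):   # monotone expected-outcome curve for one group (made up)
    return 1 / (1 + np.exp(-(t - 2.0)))

def f_minus(t):  # monotone expected-outcome curve for the other group (made up)
    return 1 / (1 + np.exp(-(t - 3.0)))

gl, gu = 0.3, 0.7

def inv(f, g, lo=-10.0, hi=15.0):           # numerical inverse via root finding
    return brentq(lambda t: f(t) - g, lo, hi)

def inverse_integral(f):                    # int_{gl}^{gu} f^{-1}(g) dg via the identity
    a, b = inv(f, gl), inv(f, gu)
    return gu * b - gl * a - quad(f, a, b)[0]

aed_identity = inverse_integral(f_minus) - inverse_integral(f_plus)   # Eq. (3)
aed_direct = quad(lambda g: inv(f_minus, g) - inv(f_plus, g), gl, gu)[0]
print(aed_identity, aed_direct)             # the two values agree
```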

Outcome Regression

Outcome regression is a straightforward method for causal inference. In this approach, a model is posited for the outcome variable as a function of the treatment variable and the covariates. The basic outcome regression model is a linear regression of the form

E[Y | T = t, X = x] = α_0 + α_1 t + θ^T x,

where α_0 and α_1 are regression coefficients and θ is a coefficient vector with the same length as x. All the parameters can be estimated by the least squares method.

One advantage of outcome regression is that it allows us to directly calculate the treatment value corresponding to a given expected outcome level. Suppose the regression model is correctly specified; the expected outcome of any subset D' under treatment t is then E[Y_{D'}(t)] = α_0 + α_1 t + θ^T x̄_{D'}, where x̄_{D'} is the average covariate vector of D'. Thus, the minimum value of the treatment variable to achieve a γ-level outcome, i.e., Ψ_{D'}(γ), can be expressed as

Ψ_{D'}(γ) = ( γ − α_0 − θ^T x̄_{D'} ) / α_1.   (4)
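As an illustration, the following minimal sketch (synthetic data and assumed variable names; not the authors' code) fits the linear outcome regression for one subgroup and inverts it according to Eq. (4):

```python
# Fit E[Y | T, X] = a0 + a1*T + theta'X by least squares, then solve Eq. (4)
# for the smallest treatment value whose predicted outcome reaches gamma.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 2))                            # covariates of the subgroup
T = rng.integers(0, 5, size=n).astype(float)           # ordinal treatment (0..4)
Y = 0.1 + 0.15 * T + 0.05 * X[:, 0] + rng.normal(scale=0.05, size=n)

reg = LinearRegression().fit(np.column_stack([T, X]), Y)
a0 = reg.intercept_
a1, theta = reg.coef_[0], reg.coef_[1:]

gamma = 0.7
x_bar = X.mean(axis=0)                                 # average covariates in the subgroup
# Eq. (4): Psi(gamma) = (gamma - a0 - theta' x_bar) / a1, assuming a1 > 0.
psi = (gamma - a0 - theta @ x_bar) / a1
print("minimum treatment value to reach gamma:", psi)
```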

Propensity Score Weighting

Another widely used branch of causal inference is based on weighting, and a typical method is inverse propensity score weighting. In our context, the treatment variable is a multi-valued ordinal variable, so we apply the generalized propensity score [Imbens2000] to estimate the weights.

Definition 6 (Generalized Propensity Score).

The generalized propensity score for individual i is the conditional probability of receiving a particular level t of the treatment given the pre-treatment variables, r(t, x_i) = P(T = t | X = x_i).

The weighted mean of the potential outcomes for those who received treatment t, had they instead received another treatment t', can be consistently estimated by

Ê[Y(t') | T = t] = Σ_{i: T_i = t'} w_i Y_i / Σ_{i: T_i = t'} w_i,

where w_i = r(t, x_i) / r(t', x_i).

Following the above method, we can obtain a table of estimated expected outcomes under all treatment pair combinations (t, t'). Thus, the minimum treatment value to achieve a γ-level outcome can be determined by comparing the results in that table.
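The following minimal sketch (synthetic data; the multinomial logistic propensity model and variable names are our assumptions, not necessarily the authors' choices) builds such a table of weighted estimates for a multi-valued ordinal treatment:

```python
# Generalized propensity-score weighting for a multi-valued ordinal treatment:
# estimate r(t, x) = P(T = t | X = x), then build the table of weighted
# estimates of E[Y(t') | T = t] for all treatment pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, levels = 5000, 5
X = rng.normal(size=(n, 3))
T = np.clip((2 + X[:, 0] + rng.normal(size=n)).round(), 0, levels - 1).astype(int)
Y = (rng.uniform(size=n) < 0.1 + 0.18 * T).astype(float)

gps = LogisticRegression(max_iter=1000).fit(X, T)      # multiclass logistic model
r = gps.predict_proba(X)                               # r[i, t] = P(T = t | x_i)

table = np.zeros((levels, levels))                     # table[t, t'] ~ E[Y(t') | T = t]
for t in range(levels):
    for t_new in range(levels):
        idx = np.where(T == t_new)[0]                  # individuals observed at t'
        w = r[idx, t] / r[idx, t_new]                  # reweight them toward the T = t group
        table[t, t_new] = np.sum(w * Y[idx]) / np.sum(w)
print(np.round(table, 3))
```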

Structural Causal Model

The structural causal model describes the causal mechanisms of a system as a set of structural equations. For ease of representation, each causal model can be illustrated by a directed acyclic graph called the causal graph, where each node represents a variable and each edge represents a direct causal relationship specified by the causal model. In addition, each node V is associated with a conditional probability distribution P(v | pa(V)), where pa(V) is a realization of the set of variables called the parents of V. The treatment is modeled using the intervention, which forces the treatment variable T to take a certain value t, formally denoted by do(T = t) or do(t). The potential outcome of variable Y under intervention do(t) is denoted by Y(t). The distribution of Y(t), also referred to as the post-intervention distribution of Y under do(t), is denoted by P(Y(t) = y). Facilitated by the intervention, the expected outcome E[Y_{D'}(t)] can be measured by the counterfactual quantity E[Y(t) | d'], where d' represents the attribute values that form the subgroup D'. This counterfactual quantity measures the expected outcome of Y assuming that the intervention do(t) is performed on the subgroup of individuals D' only. According to [Pearl2009], if the attributes in d' are non-descendants of T in the causal graph, then E[Y(t) | d'] can be computed from observational data as

E[Y(t) | d'] = Σ_y y Σ_{pa(T)} P(y | t, pa(T), d') P(pa(T) | d'),

where t in the probabilities means that the treatment variable T is assigned the corresponding value t.

If the inverse function of E[Y_{D'}(t)] can be derived, then we follow lines 6-10 in Algorithm 1 to compute the AED; otherwise, we follow lines 12-19.
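The following minimal sketch (hypothetical column names and toy data, not the authors' code) evaluates this adjustment formula on a discrete dataset with pandas:

```python
# Adjustment formula for the counterfactual quantity E[Y(t) | d'] on discrete data:
#   E[Y(t) | d'] = sum_pa E[Y | T = t, PA = pa, d'] * P(PA = pa | d'),
# where PA are the parents of T in the causal graph.
import pandas as pd

def expected_outcome_under_do(df, t, subgroup, parents, treat="T", outcome="Y"):
    sub = df
    for col, val in subgroup.items():                  # restrict to the subgroup d'
        sub = sub[sub[col] == val]
    total = 0.0
    for _, block in sub.groupby(parents):
        p_pa = len(block) / len(sub)                   # P(PA = pa | d')
        treated = block[block[treat] == t]
        if len(treated) == 0:
            continue                                   # no support; skip this stratum
        total += treated[outcome].mean() * p_pa        # E[Y | t, pa, d'] * P(pa | d')
    return total

# Toy data frame with protected attribute S, parent A of T, treatment T, outcome Y.
df = pd.DataFrame({
    "S": [0, 0, 0, 0, 1, 1, 1, 1],
    "A": [0, 1, 0, 1, 0, 1, 0, 1],
    "T": [0, 1, 1, 2, 0, 1, 2, 2],
    "Y": [0, 0, 1, 1, 0, 1, 1, 1],
})
print(expected_outcome_under_do(df, t=1, subgroup={"S": 0}, parents=["A"]))
```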

Achieving Equal Effort

When our discrimination detection algorithm shows that a dataset does not satisfy the equal effort requirement, we may want to remove the discriminatory effects from the dataset before it is used for any predictive analysis, e.g., training a decision model. In this section, we develop a method for generating a new dataset that is close to the original dataset and also satisfies equal effort. Our removal method is based on the use of outcome regression to estimate the potential outcomes, but it can be easily extended to any method where the closed form of E[Y_D(t)] can be derived. The general idea is to derive a new outcome regression model that satisfies the equal effort constraint. Then, for each individual in the original dataset, we randomly generate a new decision value based on the expectation computed from the fair outcome regression model.

Specifically, we consider two outcome regression models for the subsets D^+ and D^- respectively, given by

E[Y_{D^+} | t, x] = α_0^+ + α_1^+ t + (θ^+)^T x   and   E[Y_{D^-} | t, x] = α_0^- + α_1^- t + (θ^-)^T x.

Then, as shown by Eq. (4), the minimum effort for subgroup D^+ (and similarly for subgroup D^-) is given by

Ψ_{D^+}(γ) = ( γ − α_0^+ − (θ^+)^T x̄_{D^+} ) / α_1^+.

As a result, the AED according to either Eq. (1) or Eq. (2) is given by

AED = A( Ψ_{D^-}(γ) − Ψ_{D^+}(γ) ),

where A denotes the average over Γ if γ is discrete and the integral over [γ_l, γ_u] if γ is continuous. We want the AED to approach zero. After adding a penalty term for the AED, the objective function becomes

L = Σ_{i ∈ D^+} ( y_i − α_0^+ − α_1^+ t_i − (θ^+)^T x_i )^2 + Σ_{i ∈ D^-} ( y_i − α_0^- − α_1^- t_i − (θ^-)^T x_i )^2 + λ · AED^2,

where the AED is computed by Eq. (1) or Eq. (2) and λ is the parameter balancing the two objectives.

Finally, for each individual in the dataset with profile (s, t, x, y), we first compute the expected value of the decision using the fair outcome regression model, i.e., ŷ = α_0^+ + α_1^+ t + (θ^+)^T x if s = s^+ and ŷ = α_0^- + α_1^- t + (θ^-)^T x otherwise. Then, we randomly assign y^+ or y^- as the new decision value according to the probability given by ŷ. The generated data then satisfy the equal effort requirement.
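Below is a minimal PyTorch sketch of this idea (our own illustration, with synthetic data, a discrete Γ, and a squared-AED penalty as assumptions; it is not the authors' released implementation). It jointly fits the two subgroup regression models while penalizing the AED computed in closed form via Eq. (4):

```python
# Jointly fit linear outcome models for D+ and D- with a squared-AED penalty.
import torch

def fit_fair_models(t_p, X_p, y_p, t_m, X_m, y_m, gammas, lam=10.0, steps=2000):
    d = X_p.shape[1]
    # parameter layout: [a0, a1, theta_1..theta_d]; a1 starts at 1 to keep Eq. (4) defined
    w_p = torch.tensor([0.0, 1.0] + [0.0] * d, requires_grad=True)
    w_m = torch.tensor([0.0, 1.0] + [0.0] * d, requires_grad=True)
    opt = torch.optim.Adam([w_p, w_m], lr=0.05)

    def predict(w, t, X):
        return w[0] + w[1] * t + X @ w[2:]

    def min_effort(w, X, gamma):          # closed form of Eq. (4) with average covariates
        return (gamma - w[0] - X.mean(0) @ w[2:]) / w[1]

    for _ in range(steps):
        opt.zero_grad()
        fit_loss = ((predict(w_p, t_p, X_p) - y_p) ** 2).mean() \
                 + ((predict(w_m, t_m, X_m) - y_m) ** 2).mean()
        aed = torch.stack([min_effort(w_m, X_m, g) - min_effort(w_p, X_p, g)
                           for g in gammas]).mean()
        (fit_loss + lam * aed ** 2).backward()
        opt.step()
    return w_p.detach(), w_m.detach()

# Toy usage with synthetic subgroup data (D+ and D-).
torch.manual_seed(0)
n, d = 500, 2
X_p, X_m = torch.randn(n, d), torch.randn(n, d)
t_p, t_m = torch.randint(0, 5, (n,)).float(), torch.randint(0, 5, (n,)).float()
y_p = 0.10 + 0.15 * t_p + 0.05 * X_p[:, 0] + 0.02 * torch.randn(n)
y_m = 0.05 + 0.10 * t_m + 0.05 * X_m[:, 0] + 0.02 * torch.randn(n)
w_p, w_m = fit_fair_models(t_p, X_p, y_p, t_m, X_m, y_m,
                           gammas=torch.tensor([0.3, 0.5, 0.7]))
print("D+ coefficients:", w_p, "\nD- coefficients:", w_m)
```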

Category  Original Values
0         Preschool, 1st-4th, 5th-6th
1         7th-8th, 9th, 10th
2         11th, 12th, HS-grad
3         Some-college, Assoc-voc, Assoc-acdm
4         Bachelors, Masters, Prof-school, Doctorate
Table 2: Preprocessing education into five levels.

Experiments

Figure 1: Causal Graph for the Adult Dataset.

We evaluate our discrimination detection and removal algorithms based on the proposed equality of effort on the UCI Adult dataset [Lichman2013]. The Adult dataset contains census records described by demographic and employment attributes. We select seven attributes, sex, age, marital status, workclass, education, hours, and income, in our experiments. We consider income as the outcome, education as the treatment attribute, and sex as the protected attribute. Due to the sparse data issue, we binarize the domains of age, marital status, workclass, and hours into two classes. We also categorize the 16 values of education into five levels, as shown in Table 2.

In our experiments, we calculate the minimum effort based on three methods: outcome regression (Regression), propensity score weighting (Weighting), and structural causal model inference (SCM). For Weighting, we implement propensity score weighting for multiple treatments following [McCaffrey et al.2013] and [Burgette, Griffin, and McCaffrey2017]. For SCM, we follow the settings of [Zhang, Wu, and Wu2017b] and use three tiers for causal graph learning: sex and age in Tier 1; marital-status, education, workclass, and hours in Tier 2; and income in Tier 3. The causal graph is constructed and presented using the open-source software TETRAD [Scheines et al.1998]. We employ the original PC algorithm [Spirtes, Glymour, and Scheines2000] and set the significance threshold to 0.01 for the conditional independence tests in causal graph construction. Figure 1 shows the learned causal graph. We apply nonparametric inference of the structural causal model following [Zhang, Wu, and Wu2017a]. In discrimination removal, the quadratic program is solved using PyTorch [Paszke et al.2017].

Discrimination Discovery

Checking Equal Effort at the System Level

Table 3 shows the expectations of the potential outcome for males and for females in Adult. We calculate the expectations of the potential outcomes using three methods, Weighting, Regression, and SCM, and vary the treatment variable education from 0 to 4. As shown in Table 3, the expectations of the potential outcome for males are significantly higher than the corresponding values for females, indicating that a large effort discrepancy exists in Adult. For example, based on SCM, the expectation for males is 0.741 and that for females is 0.469 when education is 3. If we set γ = 0.7, the minimum value of the treatment variable (education) to achieve the γ-level outcome is 3 for males (with the expectation of the potential outcome 0.741) and 4 for females (with the expectation of the potential outcome 0.706). The effort discrepancy between females and males is thus 1, which indicates the existence of significant discrimination in terms of equal effort fairness. We would like to point out that the expectations of the potential outcome calculated by the three methods are generally consistent, as shown in Table 3. However, each calculation method has its own applicable assumptions and may not produce reliable results when those assumptions are not met. There is extensive research on the applicability of these causal inference methods (e.g., see [Pearl2009]), which is out of the scope of this work.

education sex=male sex=female
Weighting Regression SCM Weighting Regression SCM
0 0.196 0.086 0.164 0.048 0.026 0.057
1 0.269 0.214 0.239 0.066 0.051 0.075
2 0.513 0.491 0.498 0.211 0.190 0.221
3 0.736 0.781 0.741 0.416 0.497 0.469
4 0.842 0.933 0.859 0.485 0.807 0.706
Table 3: Expectation of the potential outcome for males and females in Adult dataset.

Checking Equal Effort at the Group Level

For group-level equality of effort, we split the Adult dataset into five groups by education: individuals with the same education value form one group. For each group, we calculate the expectations of the potential outcome for males and for females. Due to the space limit, we only report in Table 4 the expectations of the potential outcome for the group with the original education = 0. Each expectation is calculated using the three methods. We can see a significant discrepancy between males and females in this group, and we observe similar phenomena in the other four groups. When considering an outcome level of about 0.7, the minimum education value to achieve the outcome for males in this group is 3 (with the expectation values from all three methods close to 0.7), whereas the minimum education level for females is 4.

education sex=male sex=female
Weighting Regression SCM Weighting Regression SCM
1 0.225 0.232 0.227 0.071 0.084 0.081
2 0.457 0.462 0.467 0.205 0.205 0.224
3 0.692 0.694 0.719 0.418 0.411 0.497
4 0.810 0.870 0.842 0.497 0.693 0.754
Table 4: Expectations of the potential outcome for males and females with the original education=0.

Checking Equal Effort at the Individual Level

To detect effort discrepancy at the individual level, we first identify a subset of users with the same characteristics as the given individual and then split them into the male group and the female group. We then calculate the expectations of the potential outcome for the male group and the female group under each treatment level t. Due to the space limit, we only report in Table 5 the results for three randomly chosen female users whose index numbers are 425, 9569, and 46437. Users 1 and 2 both have the original education value 1, and user 3 has education value 0. As shown in Table 5, the expectations of the outcome for the male group are consistently higher than those for the female group, indicating the existence of discrimination in terms of equal effort for these three individuals. For example, the results for user 3 show that the minimum education level she needs to achieve a given outcome level is higher than it would be had she been a male (e.g., education 4 versus 3 for an outcome level of 0.5).

education User 1 User 2 User 3
sex=male sex=female sex=male sex=female sex=male sex=female
0 0.012 0.006
1 0.022 0.007 0.058 0.030 0.051 0.024
2 0.085 0.036 0.206 0.134 0.188 0.096
3 0.282 0.159 0.523 0.438 0.501 0.317
4 0.624 0.487 0.823 0.796 0.813 0.669
Table 5: Expectation of the potential outcome for three randomly chosen individuals.

Discrimination Removal

We run our removal algorithm to remove discrimination in terms of equality of effort from the Adult dataset, and then run the discovery algorithm to examine whether discrimination is truly removed from the modified dataset. For comparison, we include the removal algorithm (denoted by DI) of [Feldman et al.2015], which removes discrimination from the demographic parity perspective; basically, DI modifies the non-protected attributes such that they can no longer be used to predict the protected attribute. The results show that, after executing our removal method, the average difference between the minimum efforts of the two groups over all values of γ is close to zero, indicating that the effort discrepancy has been removed, whereas the corresponding average difference for the DI algorithm remains large, showing that DI does not remove the effort discrepancy. Regarding data utility, our method also incurs a smaller utility loss than the DI algorithm.

Conclusions and Future Work

In this paper, we proposed a new causal-based fairness notion called equality of effort. Although previous fairness notions can be used to judge discrimination from various perspectives (e.g., demographic parity, equality of opportunity), they cannot quantify the difference in the efforts that individuals need to make in order to achieve certain outcome levels. Our proposed notion, on the other hand, can help answer counterfactual questions like "by how much should an applicant improve her credit score such that the probability of her loan application approval is above a threshold", and judge discrimination from the equal-effort perspective. To quantify the average effort discrepancy, we developed a general method under certain assumptions, as well as specific methods based on three widely used causal inference techniques. When equality of effort is not achieved by a dataset, our optimization-based method can remove the discrimination. In the experiments, we showed that the Adult dataset does contain effort discrepancies at the system, group, and individual levels, and that our removal method can ensure that the newly generated dataset satisfies equality of effort.

We made several assumptions in our paper, including the no-hidden-confounder assumption, the monotonicity of the expectation of the outcome variable, and the invertibility of the outcome function. We also assumed one binary protected attribute and one binary decision for simplicity's sake. The no-hidden-confounder assumption is a common assumption for causal inference [Pearl2009] and is widely adopted by causal inference based fair learning. The monotonicity assumption reflects the real-world phenomenon that more effort leads to a better outcome. The invertibility assumption is used in our general method of calculating the average effort discrepancy without deriving the closed form of the inverse function. When the invertibility assumption does not hold, our algorithm (lines 12-19) falls back on inference methods that may have their own limitations. Moreover, we implicitly assumed that the discrimination detection algorithm knows the same information as the decision maker, i.e., there are no omitted variables that were used in decision making but are invisible to the discrimination detection. In our future work, we will study how to achieve equal effort fairness when some of these assumptions are not met in practice.

In our paper, we used the change of the treatment variable value as the effort needed to achieve a certain outcome level and did not consider the real monetary or resource costs behind that change, which are often not included in the data. If such factors are included in the data, the discrimination caused by them is related to indirect discrimination. We will study the use of path-specific effect/mediator analysis [Zhang, Wu, and Wu2017b, Nabi and Shpitser2018] to explicitly quantify the effect of the treatment on the final outcome via proxy attributes.

Acknowledgments

This work was supported in part by NSF 1646654, 1920920, and 1940093.

References

  • [Burgette, Griffin, and McCaffrey2017] Burgette, L.; Griffin, B. A.; and McCaffrey, D. 2017. Propensity scores for multiple treatments: A tutorial for the mnps function in the twang package. R package. Rand Corporation.
  • [Chiappa and Gillam2019] Chiappa, S., and Gillam, T. P. 2019. Path-specific counterfactual fairness. In AAAI’19.
  • [Dwork et al.2012] Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214–226. ACM.
  • [Feldman et al.2015] Feldman, M.; Friedler, S. A.; Moeller, J.; Scheidegger, C.; and Venkatasubramanian, S. 2015. Certifying and Removing Disparate Impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’15, 259–268. ACM Press.
  • [Hardt et al.2016] Hardt, M.; Price, E.; Srebro, N.; et al. 2016. Equality of opportunity in supervised learning. In Advances in neural information processing systems, 3315–3323.
  • [Heidari, Nanda, and Gummadi2019] Heidari, H.; Nanda, V.; and Gummadi, K. P. 2019. On the long-term impact of algorithmic decision policies: Effort unfairness and feature segregation through social learning. CoRR abs/1903.01209.
  • [Holland1986] Holland, P. W. 1986. Statistics and causal inference. Journal of the American statistical Association 81(396):945–960.
  • [Imbens2000] Imbens, G. W. 2000. The role of the propensity score in estimating dose-response functions. Biometrika 87(3):706–710.
  • [Kilbertus et al.2017] Kilbertus, N.; Carulla, M. R.; Parascandolo, G.; Hardt, M.; Janzing, D.; and Schölkopf, B. 2017. Avoiding discrimination through causal reasoning. In Advances in Neural Information Processing Systems, 656–666.
  • [Kusner et al.2017] Kusner, M. J.; Loftus, J.; Russell, C.; and Silva, R. 2017. Counterfactual fairness. In Advances in Neural Information Processing Systems, 4066–4076.
  • [Lichman2013] Lichman, M. 2013. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml.
  • [Luong, Ruggieri, and Turini2011] Luong, B. T.; Ruggieri, S.; and Turini, F. 2011. k-nn as an implementation of situation testing for discrimination discovery and prevention. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, 502–510. ACM.
  • [McCaffrey et al.2013] McCaffrey, D. F.; Griffin, B. A.; Almirall, D.; Slaughter, M. E.; Ramchand, R.; and Burgette, L. F. 2013. A tutorial on propensity score estimation for multiple treatments using generalized boosted models. Statistics in medicine 32(19):3388–3414.
  • [Nabi and Shpitser2018] Nabi, R., and Shpitser, I. 2018. Fair inference on outcomes. In Proceedings of AAAI’18, volume 2018.
  • [Paszke et al.2017] Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in pytorch.
  • [Pearl2009] Pearl, J. 2009. Causality. Cambridge university press.
  • [Romei and Ruggieri2014] Romei, A., and Ruggieri, S. 2014. A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review 29(05):582–638.
  • [Rosenbaum and Rubin1983] Rosenbaum, P. R., and Rubin, D. B. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika 70(1):41–55.
  • [Russell et al.2017] Russell, C.; Kusner, M. J.; Loftus, J.; and Silva, R. 2017. When worlds collide: integrating different counterfactual assumptions in fairness. In Advances in Neural Information Processing Systems, 6414–6423.
  • [Scheines et al.1998] Scheines, R.; Spirtes, P.; Glymour, C.; Meek, C.; and Richardson, T. 1998. The tetrad project: Constraint based aids to causal model specification. Multivariate Behavioral Research 33(1):65–117.
  • [Spirtes, Glymour, and Scheines2000] Spirtes, P.; Glymour, C. N.; and Scheines, R. 2000. Causation, prediction, and search, volume 81. MIT press.
  • [Verma and Rubin2018] Verma, S., and Rubin, J. 2018. Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare), 1–7. IEEE.
  • [Wu et al.2019] Wu, Y.; Zhang, L.; Wu, X.; and Tong, H. 2019. PC-fairness: A unified framework for measuring causality-based fairness. CoRR abs/1910.12586.
  • [Zafar et al.2017] Zafar, M. B.; Valera, I.; Rodriguez, M. G.; and Gummadi, K. P. 2017. Fairness constraints: Mechanisms for fair classification. In AISTATS.
  • [Zhang and Bareinboim2018a] Zhang, J., and Bareinboim, E. 2018a. Equality of opportunity in classification: A causal approach. In Advances in Neural Information Processing Systems, 3671–3681.
  • [Zhang and Bareinboim2018b] Zhang, J., and Bareinboim, E. 2018b. Fairness in decision-making—the causal explanation formula. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • [Zhang, Wu, and Wu2016] Zhang, L.; Wu, Y.; and Wu, X. 2016. Situation Testing-Based Discrimination Discovery: A Causal Inference Approach. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, volume 2016-Janua, 2718–2724. IJCAI/AAAI Press.
  • [Zhang, Wu, and Wu2017a] Zhang, L.; Wu, Y.; and Wu, X. 2017a. Achieving Non-Discrimination in Data Release. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13 - 17, 2017, 1335–1344. New York, New York, USA: ACM Press.
  • [Zhang, Wu, and Wu2017b] Zhang, L.; Wu, Y.; and Wu, X. 2017b. A causal framework for discovering and removing direct and indirect discrimination. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, 3929–3935.
  • [Žliobaite, Kamiran, and Calders2011] Žliobaite, I.; Kamiran, F.; and Calders, T. 2011. Handling conditional discrimination. In Data Mining (ICDM), 2011 IEEE 11th International Conference on, 992–1001. IEEE.