Comparing Fairness Criteria Based on Social Outcome

06/13/2018
by   Junpei Komiyama, et al.

Fairness in algorithmic decision-making processes is attracting increasing concern. When an algorithm is applied to human-related decision-making, an estimator that solely optimizes its predictive power can learn biases in the existing data, which motivates the notion of fairness in machine learning. While several different notions of fairness have been studied in the literature, little work has been done on how these notions affect the individuals involved. We demonstrate such a comparison between several policies induced by well-known fairness criteria, including color-blindness (CB), demographic parity (DP), and equalized odds (EO). We show that EO is the only criterion among them that removes group-level disparity. Empirical studies on the social welfare and disparity of these policies are also conducted.


1 Introduction

The goal of supervised learning is to estimate a label $y$ by learning an estimator as a function of the associated feature $x$. Arguably, an estimator with better predictive power is preferred, and a standard supervised learning algorithm learns such an estimator from existing data. However, when it is applied to human-related decision-making, such as employment, college admission, and credit, an estimator optimizing its predictive power can learn biases present in the existing data. To address this issue, fairness-aware machine learning proposes methodologies that yield predictors that not only have good predictive power but also comply with some notion of non-discrimination.

Let $s$ be the (categorical) sensitive attribute among $x$ that represents the applicant's identity (e.g., gender or race). Group-level fairness concerns the inequality among groups with different values of $s$. A naive approach, which we call color-blind [1], is to remove $s$ from $x$ in predicting $y$. Although such an approach avoids direct discrimination through $s$, the correlation between $s$ and the other attributes in $x$ causes indirect discrimination, which is referred to as disparate impact. Another notion of fairness, which is widely studied (e.g., [2, 3, 4, 5]), is demographic parity (DP). DP requires the independence of the prediction from $s$. For instance, a university admission process complies with DP if each group has equal access to the university. Demographic parity is justified in the legal context of the labor market: the U.S. Equal Employment Opportunity Commission [6] codified the so-called 80% rule, which prohibits employment decisions with non-negligible inequality between groups. In spite of this legal background, some concerns about DP have been raised. Hardt et al. [7] argued that DP is incompatible with the perfect classifier $\hat y = y$, and thus it is not appropriate when the true label is reliable. To address this issue, they provided an alternative notion of fairness called equalized odds (EO), which requires the independence of the prediction from $s$ conditioned on $y$, and thus allows the perfect classifier. Note that essentially the same notion is also proposed in Zafar et al. [8], and the notion of counterfactual fairness [9] is similar to EO under a specific causal model. Note also that DP and EO are mutually incompatible [10].
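To make the distinction concrete, the following sketch (our own illustration with synthetic data; the variable names and the generating process are hypothetical, not from the paper) measures the demographic-parity gap and the equalized-odds gaps of a fixed binary predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: labels y, group membership s, and binary predictions yhat.
n = 10_000
s = rng.integers(0, 2, size=n)                     # sensitive attribute (two groups)
y = rng.binomial(1, np.where(s == 0, 0.6, 0.4))    # true labels, group-dependent base rates
yhat = rng.binomial(1, 0.3 + 0.4 * y)              # a noisy predictor that uses y only

def dp_gap(yhat, s):
    """Demographic parity: acceptance rates should not depend on s."""
    return abs(yhat[s == 0].mean() - yhat[s == 1].mean())

def eo_gaps(yhat, y, s):
    """Equalized odds: TP and FP rates should not depend on s."""
    tp = [yhat[(s == g) & (y == 1)].mean() for g in (0, 1)]
    fp = [yhat[(s == g) & (y == 0)].mean() for g in (0, 1)]
    return abs(tp[0] - tp[1]), abs(fp[0] - fp[1])

print("DP gap:", dp_gap(yhat, s))
print("EO gaps (TP, FP):", eo_gaps(yhat, y, s))
```

Because the predictor here depends on the label only, its TP and FP rates coincide across groups while its acceptance rates differ with the base rates, illustrating that DP and EO are indeed distinct requirements.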

Despite massive interest in fairness in machine learning, only a few works have considered the social impact that a policy based on a proposed notion of fairness produces. The result of a policy is far from straightforward, and in some cases the introduction of a naive notion of fairness can be harmful. For example, consider a university admission policy. If the admission office discriminates against black applicants by believing they are less likely to perform well academically, and lowers the admission standard for them to advance affirmative action, black applicants may be discouraged from investing in their education because they pass the admission regardless of their effort. As a result, they may end up being less proficient, and the negative stereotype "self-perpetuates". Indeed, the self-fulfillment of stereotypes is an empirically documented phenomenon in some fields [11]. The difficulty of analyzing this phenomenon lies in the interaction between the policy-maker and the applicants: when a policy changes, the applicants also change their behavior in response to the modified incentives.

This lack of attention to the social outcome, in turn, results in the absence of a unified measure for comparing different fairness criteria. In this regard, economic theory offers useful tools. In particular, the literature in labor economics has a long history of analyzing the welfare implications of policy changes. That is, economists investigate how the players' welfare, or the aggregate level of their utility, changes when a policy is imposed.

By combining the theoretical framework developed in labor economics with the "oblivious" post-processing approach to non-discriminatory machine learning [7], we propose a framework for comparing different fairness notions in view of the incentives they create. We demonstrate such a comparison between several policies induced by well-known fairness criteria: color-blind (CB), demographic parity (DP), and equalized odds (EO). As a result, we show that while CB and DP sometimes disproportionately discourage unfavored groups from investing in improving their qualifications, EO incentivizes the two groups equally.

Importantly, our framework is not just theoretical but applicable in practice, and it enables an assessment of fairness notions based on the actual situation. To demonstrate this point, we compare the fairness policies using a real-world dataset. We show that (i) unlike CB and DP, EO is disparity-free, and (ii) all of CB, DP, and EO tend to reduce social welfare compared to the case of no fairness intervention. Among them, EO yielded the lowest social welfare, which one can view as a cost of removing disparity.

1.1 Related work

A long line of work on discrimination and affirmative action policy exists in the labor economics literature ([12]; [13]; see Fang and Moro [14] for a survey of recent theoretical frameworks). Coate and Loury [1] considered a simple model where an employer infers applicants' productivity from a one-dimensional signal, which contains information about the effort they invested in skills. This seminal paper argues that, even under an affirmative action policy that forces the employer to hire all groups at the same rate, there still exist equilibria where one group is negatively stereotyped and, consequently, discouraged from investing in skills.

The problem with those analyses in economics is that their settings are abstract and simplified, so they do not allow real-world applications with actual datasets. For instance, based on their simple model, Coate and Loury [1] state that "The simplest intervention would insist that employers make color-blind assignments" and that this would ensure fairness as well as equal incentives across groups. However, it is commonly understood in machine learning that a color-blind policy does not ensure fairness, due to disparate impact [15, 16, 17]. Lacking consideration of such learning-from-data processes and related issues, the frameworks proposed in economics are not designed for real-world application. This paper modifies their models to be applicable to machine learning problems. More importantly, their main interest lies in affirmative action: while affirmative action that imposes a restriction on outcomes, such as the ratio of admitted students (which is similar to demographic parity), is arguably important, modern machine learning proposes various methodologies to ensure fairness at the prediction level, not the outcome level.

A few papers in machine learning have considered a game-theoretic view of decision-making processes, which enables a comparison of fairness criteria. In particular, the papers closest to ours are [18, 19]. Hu and Chen [18] considered a two-stage process in which the stages deal with group-level and individual-level fairness, respectively, whereas we focus on comparing several notions of group-level fairness. Liu et al. [19] compared several notions of fairness, including demographic parity and equalized opportunity, in terms of their long-term improvements and characterized the conditions under which each of these fairness constraints works. Unlike ours, the analysis in Liu et al. [19] assumes the availability of the function that determines how the delayed impact of a prediction arises; identifying such a function requires counterfactual experiments or model-dependent analyses. Moreover, they evaluate the fairness criteria by the disparity between groups, without analyzing social welfare. By assuming a model with a micro-foundation of the players' decision-making, we are able to compare the welfare implications of different fairness criteria.

2 Model

Figure 1: Sequence of timings.

We consider a game between a continuum of applicants and a single firm. The game models application processes such as university admissions, job applications, and credit card applications. The firm has a dataset on the performance of past applicants and uses it to estimate the performance of future applicants. For ease of discussion, we assume that there exist two groups: each applicant is assigned a sensitive attribute $s \in \{1, 2\}$. Let $n_s$ be the fraction of the applicants of group $s$, so that $n_1 + n_2 = 1$. Each applicant has the option to exert effort, and before determining whether or not to exert the effort, the applicant is given a cost $c$ of doing so. Let $e \in \{0, 1\}$ be the variable that indicates the effort of an applicant. The applicant's feature $x$ is drawn from a distribution that depends on $e$ and $s$. The effort is highly relevant to the performance of the applicant, and thus the firm would like to accept all applicants with $e = 1$ (whom we call the qualified applicants) and to dismiss the applicants with $e = 0$ (whom we call the unqualified applicants). If a qualified applicant is accepted, the firm earns revenue $q$. If an unqualified applicant is accepted, the firm loses $u$ (i.e., earns negative revenue). All applicants prefer to be accepted, and we let $w$ be the reward of an applicant who is accepted. The firm uses a pre-trained classifier that estimates the effort of the applicant from the sensitive attribute $s$ and the non-sensitive attributes $x$. Following [7], we assume that the classifier is a function $\theta = f(x, s)$, where $\theta$ indicates how likely the applicant is to be qualified. Let $f_s(\theta \mid e)$ and $F_s(\theta \mid e)$ be the density and distribution of $\theta$ given $e$ and $s$. Let $G$ be the distribution of the cost $c$. For ease of discussion, we assume $G$ to be a uniform distribution over $[0, \bar{c}]$. Figure 1 displays the timing of the interaction between the applicants and the firm. We pose the following assumption on the signal of the classifier.

Assumption 1

Monotone Likelihood Ratio Property (MLRP): the likelihood ratio $f_s(\theta \mid 1) / f_s(\theta \mid 0)$ is strictly increasing in $\theta$ for each $s \in \{1, 2\}$.

Namely, Assumption 1 states that an applicant with a larger $\theta$ is more likely to be qualified.
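As a quick sanity check, one can verify the MLRP numerically for a concrete signal family; the sketch below (our own illustration, assuming a Gaussian signal $\theta \sim N(e, 1)$, a model the paper does not specify) confirms that the likelihood ratio is increasing:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical signal model: theta ~ N(1, 1) if qualified (e=1), N(0, 1) otherwise.
theta = np.linspace(-4, 5, 200)
ratio = norm.pdf(theta, loc=1.0) / norm.pdf(theta, loc=0.0)

# MLRP: the likelihood ratio f(theta|1)/f(theta|0) is strictly increasing in theta.
assert np.all(np.diff(ratio) > 0)
```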

In the sequel, we discuss rational behavior of the firm (Section 2.1) and the applicants (Section 2.2).

2.1 Firm’s behavior

The MLRP (Assumption 1) motivates the firm to set a threshold on $\theta$ for the hiring decision. A rational firm, without fairness-related restrictions, optimizes its revenue, and the optimal threshold on $\theta$ depends on the firm's belief about the fraction of qualified applicants: let $\pi_s$ be the fraction of qualified applicants in group $s$.

When the firm observes an applicant of group $s$ with signal $\theta$, the probability that this applicant is qualified is

$$P(e = 1 \mid \theta, s) = \frac{\pi_s f_s(\theta \mid 1)}{\pi_s f_s(\theta \mid 1) + (1 - \pi_s) f_s(\theta \mid 0)}.$$

The firm accepts this applicant iff $q\,P(e = 1 \mid \theta, s) - u\,P(e = 0 \mid \theta, s) \ge 0$. Given the MLRP assumption, this is equivalent to setting a threshold $\bar\theta_s$ such that

$$\frac{\pi_s f_s(\bar\theta_s \mid 1)}{(1 - \pi_s) f_s(\bar\theta_s \mid 0)} = \frac{u}{q}. \qquad (1)$$

Letting $r = u/q$ and $\varphi_s(\theta) = f_s(\theta \mid 1)/f_s(\theta \mid 0)$, (1) is equivalent to

$$\varphi_s(\bar\theta_s) = r\,\frac{1 - \pi_s}{\pi_s}, \qquad (2)$$

and applicants with $\theta \ge \bar\theta_s$ are approved.
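Continuing the hypothetical Gaussian signal model from the earlier sketch, equation (2) admits a closed-form threshold, since the likelihood ratio of $N(1, 1)$ against $N(0, 1)$ is $\varphi(\theta) = \exp(\theta - 1/2)$; a minimal sketch of the resulting FR curve (parameter values are our own choices):

```python
import numpy as np

def firm_threshold(pi, q=1.0, u=1.0):
    """FR curve: solve eq. (2), phi(theta) = (u/q) * (1-pi)/pi, under the
    hypothetical Gaussian model theta ~ N(e, 1), whose likelihood ratio is
    phi(theta) = exp(theta - 1/2)."""
    r = u / q
    return 0.5 + np.log(r * (1.0 - pi) / pi)

# The threshold falls as the firm believes more applicants are qualified.
for pi in (0.2, 0.5, 0.8):
    print(pi, firm_threshold(pi))
```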

2.2 Applicants’ behavior

Let $\beta_s(\bar\theta_s) = w\,\bigl[F_s(\bar\theta_s \mid 0) - F_s(\bar\theta_s \mid 1)\bigr]$ be the expected increase of the reward from exerting effort. Given the firm's threshold $\bar\theta_s$, $\beta_s(\bar\theta_s)$ is the incentive of the applicant to exert effort. A rational applicant invests in skills iff his or her cost $c$ is smaller than $\beta_s(\bar\theta_s)$, which implies

$$\pi_s = G\bigl(\beta_s(\bar\theta_s)\bigr). \qquad (3)$$
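Under the same hypothetical Gaussian model and a uniform cost distribution, the applicants' side, $\beta_s$ and equation (3), can be sketched as follows (again with our own parameter choices):

```python
import numpy as np
from scipy.stats import norm

def incentive(theta_bar, w=1.0):
    """beta(theta_bar) = w * [F(theta_bar|0) - F(theta_bar|1)]: the gain in
    acceptance probability from exerting effort, under theta ~ N(e, 1)."""
    return w * (norm.cdf(theta_bar, loc=0.0) - norm.cdf(theta_bar, loc=1.0))

def applicant_response(theta_bar, c_max=0.5, w=1.0):
    """AR curve, eq. (3): pi = G(beta), with G uniform on [0, c_max]."""
    return np.clip(incentive(theta_bar, w) / c_max, 0.0, 1.0)
```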

2.3 Laissez-faire Equilibria

Figure 2: Illustration of the equilibrium parameters $(\pi_s, \bar\theta_s)$. Assumption 1 implies that the FR curve is strictly decreasing and that the AR curve is unimodal. The value $\bar\theta^*$ is the mode of the AR curve.

Section 2.1 (resp. 2.2) introduced the best response of the firm (resp. the applicants) to the action of the applicants (resp. the firm). When no fairness-related constraint is imposed, a firm that fully exploits $s$ (a regime we call "laissez-faire", LF) will set a different threshold $\bar\theta_s$ for each $s$. If the fraction of qualified applicants and the hiring threshold are exactly the values postulated by the beliefs, then the players on both sides cannot increase their revenue by deviating from their current actions; namely, in equilibrium the following holds:

Definition 1

(Laissez-Faire Equilibrium [1]) An equilibrium is a quadruple $(\pi_1, \pi_2, \bar\theta_1, \bar\theta_2)$ satisfying Equalities (2) and (3) for $s \in \{1, 2\}$.

Figure 2 illustrates the beliefs at equilibria, which are the intersections of the following two curves: (i) the Firm-Response (FR) curve, which indicates the threshold $\bar\theta_s$ that maximizes the firm's revenue given $\pi_s$, as in (2), and (ii) the Applicant-Response (AR) curve, which indicates the incentive of the applicants, as in (3). The following proposition holds:

Proposition 1

(Existence of multiple equilibria, Proposition 1 in Coate and Loury [1]) For each $s$, there exist two or more intersections of the FR and AR curves if and only if there exists a threshold $\bar\theta$ at which the AR curve lies above the FR curve.

The proof directly follows from the monotonicity of the FR curve and the unimodality of the AR curve. As discussed by Coate and Loury [1], the existence of multiple intersections implies the existence of asymmetric equilibria where $\pi_1 \ne \pi_2$, even in the case where the signal is not biased (i.e., $f_1 = f_2$). Such an asymmetric equilibrium discourages the unfavored group, as a smaller $\pi_s$ implies a reduced incentive for that group.
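Combining the two best responses above, equilibria can be located numerically by discretizing the threshold and searching for intersections of the FR and AR curves, in the sense of Definition 1; the sketch below (again under the hypothetical Gaussian/uniform model, with our own parameter values) finds more than one intersection, illustrating Proposition 1:

```python
import numpy as np
from scipy.stats import norm

def firm_threshold(pi, r=1.0):
    # FR curve for theta ~ N(e, 1): phi(theta) = exp(theta - 1/2), see eq. (2).
    return 0.5 + np.log(r * (1 - pi) / pi)

def applicant_response(theta_bar, c_max=0.5, w=1.0):
    # AR curve, eq. (3), clipped away from {0, 1} so the log above is defined.
    beta = w * (norm.cdf(theta_bar, 0.0) - norm.cdf(theta_bar, 1.0))
    return np.clip(beta / c_max, 1e-9, 1 - 1e-9)

# Discretize the threshold and look for fixed points theta = FR(AR(theta)),
# i.e., intersections of the FR and AR curves (Definition 1).
grid = np.linspace(-3.0, 4.0, 20_000)
gap = firm_threshold(applicant_response(grid)) - grid
sign_change = np.nonzero(np.diff(np.sign(gap)))[0]
equilibria = [(applicant_response(t), t) for t in grid[sign_change]]
print(equilibria)   # approximate (pi, theta) pairs; more than one => multiplicity
```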

2.4 Social Welfare

In accordance with Sections 2.1 and 2.2, we define the social welfare as follows. The firm's welfare from group $s$ is

$$W^{\mathrm{F}}_s = \int_{\bar\theta_s}^{\infty} \bigl[\, q\,\pi_s f_s(\theta \mid 1) - u\,(1 - \pi_s) f_s(\theta \mid 0) \,\bigr]\, d\theta,$$

whereas the applicants' welfare is

$$W^{\mathrm{A}}_s = \int \max\Bigl\{\, w\,\bigl(1 - F_s(\bar\theta_s \mid 1)\bigr) - c,\;\; w\,\bigl(1 - F_s(\bar\theta_s \mid 0)\bigr) \,\Bigr\}\, dG(c).$$

The social welfare is the sum of the two quantities above summed over the groups: let $W_s = W^{\mathrm{F}}_s + W^{\mathrm{A}}_s$. The quantity $W = \sum_s n_s W_s$ is the social welfare per applicant.
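Given our reconstruction of the welfare formulas above, both terms can be evaluated by numerical integration; a sketch under the same hypothetical Gaussian/uniform model:

```python
import numpy as np
from scipy.stats import norm

def welfare(pi, theta_bar, q=1.0, u=1.0, w=1.0, c_max=0.5):
    """Per-applicant social welfare of one group at belief pi and threshold
    theta_bar, under the toy model theta ~ N(e, 1) and c ~ U[0, c_max].
    This follows our reconstruction of W^F_s and W^A_s above."""
    # Firm: q per accepted qualified applicant, -u per accepted unqualified one.
    firm = q * pi * (1 - norm.cdf(theta_bar, 1, 1)) \
         - u * (1 - pi) * (1 - norm.cdf(theta_bar, 0, 1))
    # Applicants: max of investing (pay c, accepted w.p. 1 - F(theta|1)) and
    # not investing (accepted w.p. 1 - F(theta|0)), averaged over c ~ U[0, c_max].
    p1 = w * (1 - norm.cdf(theta_bar, 1, 1))
    p0 = w * (1 - norm.cdf(theta_bar, 0, 1))
    cs = np.linspace(0.0, c_max, 10_000)
    applicants = np.maximum(p1 - cs, p0).mean()
    return firm + applicants
```

Evaluating this at the equilibria found in the previous sketch can be used to check the ordering asserted in Theorem 1 below.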

Theorem 1

(Equilibrium of the maximum social welfare) Fix $s$. For group $s$, let there be two equilibria $(\pi_s, \bar\theta_s)$ and $(\pi'_s, \bar\theta'_s)$ such that $\pi_s > \pi'_s$. Let $W_s$ and $W'_s$ be the corresponding social welfares. Then, $W_s \ge W'_s$.

Proof 1

Note that the fact that $(\pi_s, \bar\theta_s)$ and $(\pi'_s, \bar\theta'_s)$ are both equilibria implies that

$$\pi_s = G\bigl(\beta_s(\bar\theta_s)\bigr), \qquad \pi'_s = G\bigl(\beta_s(\bar\theta'_s)\bigr), \qquad (4)$$

and thus the welfare difference decomposes as

$$W_s - W'_s = \bigl(W^{\mathrm{F}}_s - W'^{\mathrm{F}}_s\bigr) + \bigl(W^{\mathrm{A}}_s - W'^{\mathrm{A}}_s\bigr). \qquad (5)$$

The first term is positive because the firm's optimal revenue is strictly increasing in $\pi_s$. On the other hand, the monotonicity of the FR curve and $\pi_s > \pi'_s$ imply $\bar\theta_s < \bar\theta'_s$. The second term is non-negative because $W^{\mathrm{A}}_s$, which is a function of $\bar\theta_s$, is decreasing: it is an integration over applicants, and each applicant takes the maximum over (i) paying the cost $c$ to obtain the reward of an accepted qualified applicant or (ii) obtaining the reward of an accepted unqualified applicant, and both of these two options have rewards decreasing in $\bar\theta_s$.

Theorem 1 states that the equilibria are ordered by $\pi_s$. This matches our intuition about the application process: the more effort the applicants exert, the more applicants the firm accepts, and the better the equilibrium.

3 Fairness Criteria and Their Results

Section 2.3 shows that a lack of fairness constraints discourages the individuals of the unfavored group under an asymmetric equilibrium. A natural question is whether we can impose some non-discriminatory constraint on the firm's decision-making to remove such asymmetric equilibria. This section compares several constraints discussed in the literature.

The first constraint is the one that adopts the same threshold for the two groups:

Definition 2

(Color-blind (CB) policy) The firm's decision is said to be color-blind iff $\bar\theta_1 = \bar\theta_2 = \bar\theta$. The equilibria under CB are characterized by the set of quadruples $(\pi_1, \pi_2, \bar\theta, \bar\theta)$ that satisfy the following constraints: (i) Equality (3) holds for $s \in \{1, 2\}$. (ii) Moreover, the common threshold $\bar\theta$ satisfies the pooled analogue of (1):

$$\frac{n_1 \pi_1 f_1(\bar\theta \mid 1) + n_2 \pi_2 f_2(\bar\theta \mid 1)}{n_1 (1 - \pi_1) f_1(\bar\theta \mid 0) + n_2 (1 - \pi_2) f_2(\bar\theta \mid 0)} = \frac{u}{q},$$

where the numerator and the denominator are proportional to the densities of qualified and unqualified applicants in the pooled population.

In other words, under CB the firm optimizes a single threshold $\bar\theta$ over a single population that mixes the two groups. Contrary to the argument of [1] (as discussed in Section 1), CB potentially yields an unfair treatment between the two groups when the signal distributions $f_s$ vary largely between the two groups:

Proposition 2

There exists an equilibrium with $\pi_1 \ne \pi_2$ under CB.
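Operationally, the CB threshold solves the firm's pooled problem; the sketch below (our own construction, with a noisier Gaussian signal for group 2 anticipating Example 2) computes it by root-finding on the pooled posterior odds, assuming those odds are monotone so that a single threshold exists:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def cb_threshold(pi1, pi2, n1=0.9, r=1.0, sigma=3.0):
    """Color-blind threshold: a single cutoff on the pooled population.
    Group 1's signal is theta ~ N(e, 1); group 2's is noisier, N(e, sigma^2).
    Accept iff the pooled posterior odds of e=1 reach r = u/q."""
    n2 = 1.0 - n1
    def odds(t):
        num = n1 * pi1 * norm.pdf(t, 1, 1) + n2 * pi2 * norm.pdf(t, 1, sigma)
        den = n1 * (1 - pi1) * norm.pdf(t, 0, 1) + n2 * (1 - pi2) * norm.pdf(t, 0, sigma)
        return num / den
    return brentq(lambda t: odds(t) - r, -10.0, 10.0)

print(cb_threshold(0.6, 0.6))   # pooled threshold under equal beliefs
```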

In the following, we show examples of the disparity in Proposition 2. Let $N(\mu, \sigma^2)$ be a normal distribution with mean $\mu$ and variance $\sigma^2$. Let $\mathbb{1}[A]$ be $1$ if $A$ holds or $0$ otherwise.

Example 1

(Insufficient identification) Let the feature $x$ be one-dimensional and distributed as

(6)

with $n_2$ small. As the classifier cannot consider $s$ explicitly, it utilizes the only available dimension as the signal $\theta$. Assume that $q$, $u$, and $G$ are such that there exist multiple equilibria, as shown in Figure 2, for group 1. Remember that the equilibria under CB are determined by the interaction between the firm and the mixture of the two groups. As the population share of group 2 approaches $0$, one can show that the threshold $\bar\theta$ of any equilibrium is arbitrarily close to that of one of the equilibria for the majority group 1, for which the signal has some capability of identifying whether a person exerted effort, and thus $\bar\theta$ is not very far from the group-1 threshold. In this case, most people of group 2 would be rejected regardless of their efforts (which discourages them), and thus $\pi_2$ is close to $0$ whereas $\pi_1$ is not.

Another example is the case where the predictive power of the signal largely differs between the two groups.

Example 2

(Signal of different accuracy) Let $v_1$ and $v_2$ be the orthogonal bases of $\mathbb{R}^2$, and let the feature be distributed as

(7)

In this case, a linear classifier can utilize a linear combination of the two bases to create a signal $\theta$: the first (resp. the second) basis serves to identify the effort of people in group 1 (resp. group 2). For any threshold value of $\theta$, such a signal yields very different incentives for the two groups: due to the noisier signal, $\theta$ gives very little information on whether a person of group 2 exerted effort or not. When an equilibrium exists, very few members of group 2 exert effort, whereas a certain portion of group 1 is incentivized to do so.

The implication of the examples above is as follows: when the signal treats the two groups differently, as shown in the case of credit risk prediction [7] (Figure 4 therein), the accuracy of the classifier can vary between groups, which makes a mere application of CB fail.

We next consider the constraint of demographic parity, which is arguably the most common notion of fairness in the context of fairness-aware machine learning.

Definition 3

(Demographic parity, DP) The firm's decision is said to satisfy demographic parity iff the acceptance rates of the two groups are equal. The equilibria under DP are characterized by the set of quadruples $(\pi_1, \pi_2, \bar\theta_1, \bar\theta_2)$ that satisfy the following constraints: (i) Equality (3) holds for $s \in \{1, 2\}$. (ii) Moreover, the pair of thresholds maximizes the firm's revenue subject to equal acceptance rates:

$$(\bar\theta_1, \bar\theta_2) \in \arg\max_{(\theta_1, \theta_2)} \sum_{s} n_s W^{\mathrm{F}}_s(\theta_s) \quad \text{s.t.} \quad 1 - \tilde F_1(\theta_1) = 1 - \tilde F_2(\theta_2),$$

where $\tilde F_s(\theta) = \pi_s F_s(\theta \mid 1) + (1 - \pi_s) F_s(\theta \mid 0)$ is the marginal distribution of the signal in group $s$ and $W^{\mathrm{F}}_s(\theta_s)$ is the firm's welfare of Section 2.4 evaluated at threshold $\theta_s$.

In other words, DP equalizes the ratio of accepted people between the two groups. However, as discussed in Coate and Loury [1], such a constraint does not remove disparity:

Proposition 3

There exists an equilibrium with $\pi_1 \ne \pi_2$ under demographic parity.

A formal construction of an explicit example is shown in Coate and Loury [1] (Section B therein). Although they present an example where the signal is discrete, it is not very difficult to empirically confirm that a standard classifier can yield equilibria with $\pi_1 \ne \pi_2$, as we show in Section 4. In short, an asymmetric equilibrium exists when (i) the ratio of the minority is small and (ii) the classifier is very accurate (i.e., the likelihood ratio is steep). In such a case, the firm "patronizes" the minority into not exerting effort (i.e., a small $\pi_2$) because it is relatively cheaper to admit a small fraction of unqualified minority applicants than to dismiss many qualified majority applicants. The equilibrium discourages minorities, as they have little motivation for investing in themselves when they know they are accepted regardless of their efforts.
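A sketch of the firm's constrained problem under DP (our own construction, continuing the hypothetical Gaussian model): sweep a common acceptance rate, back out each group's threshold from the quantiles of its marginal signal distribution, and keep the rate that maximizes revenue:

```python
import numpy as np
from scipy.stats import norm

def dp_thresholds(pi1, pi2, n1=0.9, q=1.0, u=1.0, grid=2000):
    """DP: equal acceptance rates. For each candidate rate a, set each group's
    threshold at the (1-a)-quantile of its marginal signal distribution, then
    pick the rate that maximizes the firm's expected revenue."""
    thetas = np.linspace(-6, 6, grid)
    def marginal_cdf(t, pi):          # signal theta ~ N(e, 1)
        return pi * norm.cdf(t, 1, 1) + (1 - pi) * norm.cdf(t, 0, 1)
    def revenue(t, pi):               # firm revenue from accepting above t
        return q * pi * (1 - norm.cdf(t, 1, 1)) - u * (1 - pi) * (1 - norm.cdf(t, 0, 1))
    best = None
    for a in np.linspace(0.01, 0.99, 99):
        t1 = thetas[np.searchsorted(marginal_cdf(thetas, pi1), 1 - a)]
        t2 = thetas[np.searchsorted(marginal_cdf(thetas, pi2), 1 - a)]
        rev = n1 * revenue(t1, pi1) + (1 - n1) * revenue(t2, pi2)
        if best is None or rev > best[0]:
            best = (rev, t1, t2)
    return best  # (revenue, threshold for group 1, threshold for group 2)
```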

Recent work [7, 8] proposed alternative criteria of fairness called equalized opportunity and equalized odds. Let $\mathrm{FP}_s(\bar\theta) = 1 - F_s(\bar\theta \mid 0)$ and $\mathrm{TP}_s(\bar\theta) = 1 - F_s(\bar\theta \mid 1)$ be the false positive (FP) and true positive (TP) rates of the classifier, respectively. The equalized odds criterion requires the classifier to have the same Receiver Operating Characteristic (ROC) curve (i.e., the curve comprised of (FP, TP) pairs) for both groups. When the data is biased, the raw signal $\theta$ does not satisfy the equalized odds criterion [15, 16, 17]. In our simulation in Section 4, the classifier trained with a U.S. national survey dataset is biased towards the majority (Figure 3 (a)). To address this issue, Hardt et al. [7] proposed a post-processing method that derives another classifier $\tilde\theta$ from the original signal $\theta$. The following theorem states the feasible region of the FP and TP rates of the derived predictor.

Theorem 2

(Feasible region of a derived predictor [7]) Consider the two-dimensional convex region spanned by the (FP, TP) curve and the line segment from $(0, 0)$ to $(1, 1)$. The (FP, TP) pair of a derived predictor lies in this convex region.

In other words, any ROC curve is achievable by a derived predictor as long as it lies under the ROC curve of $\theta$. The EO policy is formalized as follows:

Definition 4

(Equalized odds) The firm's policy is said to be odds-equalized when a (derived) predictor $\tilde\theta$ satisfies $\mathrm{FP}_1 = \mathrm{FP}_2$ and $\mathrm{TP}_1 = \mathrm{TP}_2$, and the assignment based on the derived signal is color-blind.
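One standard way to realize a point inside the feasible region of Theorem 2 is to randomize between two threshold rules, in the spirit of the post-processing of [7]; the helper below is our own hypothetical sketch, and choosing group-specific thresholds and a mixing weight so that both groups land on the same (FP, TP) point yields an odds-equalized derived predictor:

```python
import numpy as np

def derive_predictor(scores, t_lo, t_hi, lam, rng):
    """Randomize between two threshold rules: with probability lam use the
    stricter threshold t_hi, otherwise the looser t_lo. The resulting (FP, TP)
    is the lam-convex combination of the two thresholds' (FP, TP) points, so
    any point on a chord of the ROC curve is achievable."""
    use_hi = rng.random(len(scores)) < lam
    return np.where(use_hi, scores >= t_hi, scores >= t_lo).astype(int)

rng = np.random.default_rng(0)
scores = rng.random(1000)                                  # hypothetical signals
yhat = derive_predictor(scores, t_lo=0.3, t_hi=0.7, lam=0.5, rng=rng)
```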

The following theorem states that EO does not generate disparity: there exists no asymmetric equilibrium under a derived predictor satisfying EO.

Theorem 3

For any equilibrium under EO, $\pi_1 = \pi_2$ holds.

Proof 2

Let $\bar\theta$ be the threshold at an equilibrium. Under EO, the pair $(\mathrm{FP}_s(\bar\theta), \mathrm{TP}_s(\bar\theta))$ is identical for the two groups, and thus the incentive $\beta_s(\bar\theta) = w\,\bigl(\mathrm{TP}_s(\bar\theta) - \mathrm{FP}_s(\bar\theta)\bigr)$ is also identical, which by (3) implies $\pi_1 = \pi_2$.

Note that Hardt et al. [7] also proposed a policy called equalized opportunity, which only requires the equality of the TP rates. By definition, any predictor satisfying equalized odds satisfies equalized opportunity, but not vice versa. Unlike equalized odds, equalized opportunity can result in $\pi_1 \ne \pi_2$.

4 Simulation

Figure 3: (a) The ROC curves of a predictor trained with the NLSY dataset. Details of the dataset and the settings are described in Section 4. One can confirm that the convexity of the ROC curve is equivalent to the MLRP; in the figure, the two ROC curves are fairly close to convex. (b)(c) The FR and AR curves estimated from the NLSY dataset.

To assess the social welfare and the disparity at the equilibria of the LF, CB, DP, and EO policies, we conducted numerical simulations.

policy         LF    CB    DP    EO
Disparity      %     %     %     %
SW
FW

Table 1: Results of the policies. The social welfare (SW), the firm's welfare (FW), and the disparity at the best equilibrium are shown. The parameters are set as described in the settings below.

Dataset and Settings: We used the National Longitudinal Survey of Youth (NLSY97) dataset retrieved from https://www.bls.gov/nls/, which contains survey results collected by the U.S. Bureau of Labor Statistics to gather information on the labor market activities and other life events of several groups. We model a virtual company's hiring decision, assuming that the company does not have access to the applicants' academic scores. We set the label $e$ to be whether or not each person's GPA is above a threshold. The sensitive attribute $s$ is the race of the person ($s = 1$: white, $s = 2$: black or African American). We have in total 2,028 (resp. 782) people of $s = 1$ (resp. $s = 2$). The features $x$ are demographic features comprised of school records, attitude towards life (voluntary and anti-moral activities of themselves and their peers), and geographical information during 1997 (corresponding to their late teenage years). The rewards $q$ (resp. $u$) are chosen to be the gaps of the average income in 2015 (corresponding to their early thirties) between people whose GPA is above (resp. below) the threshold and all people: if a job market is in perfect competition, the wage is equivalent to the productivity of the workers a company hires, and hiring a worker yields a reward equal to the gap between his or her productivity and the average wage. The applicants' reward $w$ is chosen to model the gap between the salary at the firm and the minimum wage they would be able to obtain with minimal effort. The cost distribution $G$ is chosen to be the uniform distribution from zero up to a value above which applicants never exert effort. Note that our results are not very sensitive to these settings as long as multiple equilibria exist. We used the RidgeCV estimator of the scikit-learn library [20] to yield the signal $\theta$. Two thirds of the people are used to train the classifier, and the following results are computed on the rest.
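For concreteness, the signal-fitting step can be sketched as follows; the file name, column names, and GPA cutoff below are hypothetical placeholders, not the paper's actual preprocessing:

```python
import pandas as pd
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Minimal sketch of the signal-fitting step. "nlsy97_processed.csv", the
# "gpa"/"race" columns, and the median GPA cutoff are assumed placeholders.
df = pd.read_csv("nlsy97_processed.csv")
y = (df["gpa"] >= df["gpa"].median()).astype(int)   # assumed GPA cutoff
X_cols = [c for c in df.columns if c not in ("gpa", "race")]
X = df[X_cols].to_numpy()

# Two thirds for training, as in the paper's setup.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
model = RidgeCV().fit(X_tr, y_tr)    # ridge regression on the 0/1 label
theta = model.predict(X_te)          # continuous score used as the signal
```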

Results: From the ROC curves shown in Figure 3 (a), one can see that the accuracy of the classifier differs between the two groups: the GPA of the majority is more predictable than that of the minority. This might come from the fact that a classifier minimizes the cumulative empirical loss and, as a result, tends to fit the majority. Figure 3 (b)(c) shows the best responses of the applicants and the firm under LF. Generally, the equilibrium values of $\pi_1$ are larger than those of $\pi_2$. As a result, the social welfare per person in group 1 is usually larger than that in group 2. Note that, in estimating the FR curve, we applied some averaging to make it stable.

Based on the FR and AR curves in Figure 3, we conducted simulations to compute the social welfare (SW) and the disparity, measured by the gap between $\pi_1$ and $\pi_2$ (Table 1). In finding equilibria, we discretized the threshold and sought the points where the best response curves intersect. One can see that (i) the results of CB and DP are more or less the same as those of LF: they did not remove disparity, and DP even increased it. Unlike these policies, EO is disparity-free. (ii) EO, which is the only policy that does not yield disparity, results in the smallest SW. This result is not very surprising because EO reduces the predictive power of the classifier for the majority to match that for the minority, which one may consider as a price of achieving incentive-level equality. Somewhat surprisingly, DP slightly increases SW, which we discuss in Appendix A.

5 Conclusion

We have studied a game between many applicants and a firm, which models human-related decision-making scenarios such as university admission, hiring, and credit risk assessment. Our framework, which ties together two lines of work in theoretical labor economics and machine learning, provides a method to compare existing (or future) non-discriminatory policies in terms of their social welfare and disparity. The framework is rather simple, and many extensions can be considered (e.g., making the investment a continuous value). Although we show that EO is the only available policy that does not yield disparity, it tends to reduce social welfare. Interesting directions for future work include proposing a policy that balances social welfare and disparity: a policy with minimal or no loss of social welfare and a small disparity is desirable. Another possible line of future work lies in evaluating policies in online settings, such as multi-armed bandits [21].

Acknowledgement

The authors gratefully thank Hiromi Arai and Kazuto Fukuchi for useful discussions and insightful comments.

References

  • [1] Stephen Coate and Glenn C. Loury. Will affirmative-action policies eliminate negative stereotypes? The American Economic Review, 83(5):1220–1240, 1993.
  • [2] Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. Fairness-aware classifier with prejudice remover regularizer. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD, pages 35–50, 2012.
  • [3] Goce Ristanoski, Wei Liu, and James Bailey. Discrimination aware classification for imbalanced datasets. In 22nd ACM International Conference on Information and Knowledge Management, pages 1529–1532, 2013.
  • [4] Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, pages 325–333, 2013.
  • [5] Kazuto Fukuchi, Jun Sakuma, and Toshihiro Kamishima. Prediction with model-based neutrality. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD, pages 499–514, 2013.
  • [6] The United States Equal Employment Opportunity Commission. Uniform guidelines on employee selection procedures. March 2, 1979.
  • [7] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems, pages 3315–3323, 2016.
  • [8] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, pages 1171–1180, 2017.
  • [9] Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 4069–4079, 2017.
  • [10] Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. In 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA, pages 43:1–43:23, 2017.
  • [11] Dylan Glover, Amanda Pallais, and William Pariente. Discrimination as a self-fulfilling prophecy: Evidence from french grocery stores. The Quarterly Journal of Economics, 132(3):1219–1260, 2017.
  • [12] Kenneth J Arrow. What has economics to say about racial discrimination? The journal of economic perspectives, 12(2):91–100, 1998.
  • [13] Harry Holzer and David Neumark. Assessing affirmative action. Journal of Economic Literature, 38(3):483–568, 2000.
  • [14] Hanming Fang and Andrea Moro. Chapter 5 - theories of statistical discrimination and affirmative action: A survey. volume 1 of Handbook of Social Economics, pages 133 – 200. North-Holland, 2011.
  • [15] Latanya Sweeney. Discrimination in online ad delivery. Commun. ACM, 56(5):44–54, May 2013.
  • [16] J. Angwin, J. Larson, S. Mattu, and L. Kirchner. Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica, May 2016.
  • [17] Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Sorelle A. Friedler and Christo Wilson, editors, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Machine Learning Research, pages 77–91. PMLR, 2018.
  • [18] Lily Hu and Yiling Chen. A short-term intervention for long-term fairness in the labor market. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23-27, 2018, pages 1389–1398, 2018.
  • [19] Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. Delayed impact of fair machine learning. CoRR, abs/1803.04383, 2018.
  • [20] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
  • [21] Sampath Kannan, Michael J. Kearns, Jamie Morgenstern, Mallesh M. Pai, Aaron Roth, Rakesh V. Vohra, and Zhiwei Steven Wu. Fairness incentives for myopic agents. In Proceedings of the 2017 ACM Conference on Economics and Computation, EC ’17, pages 369–386, 2017.

Appendix A Do Non-discriminatory Policies Decrease Social Welfare?

We first explain why DP sometimes increases the social welfare. In Figure 2 of the main paper, the equilibrium threshold lies in the region where the AR curve is increasing. Intuitively, this means that around this threshold, making the requirement stricter encourages the applicants to invest in skills. Compared to LF, under DP the employer imposes a milder threshold on the disadvantaged group and a stricter threshold on the advantaged group. Given that the advantaged group's equilibrium lies in this increasing region, adopting DP encourages their investment, which can result in an improvement of the overall productivity of that group. When the minority fraction is very small, this effect offsets the loss of efficiency due to hiring less productive minority applicants, which sometimes results in an improvement of SW. As for the equalized odds, there can be some corner-case examples, depending on the shapes of the FR and AR curves, in which the EO thresholds increase the incentives of both groups: in such a case, EO increases the social welfare. In summary, when the minority fraction is small, DP sometimes improves SW, as we saw in our experiment (Table 1). EO can increase SW in some corner cases, but we think such cases are very unusual.