1 Introduction
Machine learning commonly considers static objectives defined on a snapshot of the population at one instant in time; consequential decisions, in contrast, reshape the population over time. Lending practices, for example, can shift the distribution of debt and wealth in the population. Job advertisements allocate opportunity. School admissions shape the level of education in a community.
Existing scholarship on fairness in automated decision-making criticizes unconstrained machine learning for its potential to harm historically underrepresented or disadvantaged groups in the population (Executive Office of the President, 2016; Barocas and Selbst, 2016). Consequently, a variety of fairness criteria have been proposed as constraints on standard learning objectives. Even though, in each case, these constraints are clearly intended to protect the disadvantaged group by an appeal to intuition, a rigorous argument to that effect is often lacking.
In this work, we formally examine under what circumstances fairness criteria do indeed promote the long-term well-being of disadvantaged groups measured in terms of a temporal variable of interest. Going beyond the standard classification setting, we introduce a one-step feedback model of decision-making that exposes how decisions change the underlying population over time.
Our running example is a hypothetical lending scenario. There are two groups in the population with features described by a summary statistic, such as a credit score, whose distribution differs between the two groups. The bank can choose thresholds for each group at which loans are offered. While group-dependent thresholds may face legal challenges (Ross and Yinger, 2006), they are generally inevitable for some of the criteria we examine. The impact of a lending decision has multiple facets. A default event not only diminishes profit for the bank, it also worsens the financial situation of the borrower as reflected in a subsequent decline in credit score. A successful lending outcome leads to profit for the bank and also to an increase in credit score for the borrower.
When thinking of one of the two groups as disadvantaged, it makes sense to ask what lending policies (choices of thresholds) lead to an expected improvement in the score distribution within that group. An unconstrained bank would maximize profit, choosing thresholds that meet a break-even point above which it is profitable to give out loans. One frequently proposed fairness criterion, sometimes called demographic parity, requires the bank to lend to both groups at an equal rate. Subject to this requirement the bank would continue to maximize profit to the extent possible. Another criterion, originally called equality of opportunity, equalizes the true positive rates between the two groups, thus requiring the bank to lend in both groups at an equal rate among individuals who repay their loan. Other criteria are natural, but for clarity we restrict our attention to these three.
Do these fairness criteria benefit the disadvantaged group? When do they show a clear advantage over unconstrained classification? Under what circumstances does profit maximization work in the interest of the individual? These are important questions that we begin to address in this work.
1.1 Contributions
We introduce a one-step feedback model that allows us to quantify the long-term impact of classification on different groups in the population. We represent each of the two groups A and B by a score distribution π_A and π_B, respectively. The support of these distributions is a finite set X corresponding to the possible values that the score can assume. We think of the score as highlighting one variable of interest in a specific domain such that higher score values correspond to a higher probability of a positive outcome. An institution chooses selection policies τ_A, τ_B that assign to each value in X a number representing the rate of selection for that value. In our example, these policies specify the lending rate at a given credit score within a given group. The institution will always maximize its utility (defined formally later) subject to either (a) no constraint, or (b) equality of selection rates, or (c) equality of true positive rates. We assume the availability of a function Δ such that Δ(x) provides the expected change in score for a selected individual at score x. The central quantity we study is the expected difference Δμ_A in the mean score in group A that results from an institution's policy, defined formally in Equation (2). When modeling the problem, the expected mean difference can also absorb external factors such as "reversion to the mean" so long as they are mean-preserving. Qualitatively, we distinguish between long-term improvement (Δμ_A > 0), stagnation (Δμ_A = 0), and decline (Δμ_A < 0). Our findings can be summarized as follows:

Both fairness criteria (equal selection rates, equal true positive rates) can lead to all possible outcomes (improvement, stagnation, and decline) in natural parameter regimes. We provide a complete characterization of when each criterion leads to each outcome in Section 3.

We introduce the notion of an outcome curve (Figure 1) which succinctly describes the different regimes in which one criterion is preferable over the others.

We perform experiments on FICO credit score data from 2003 and show that under various models of bank utility and score change, the outcomes of applying fairness criteria are in line with our theoretical predictions.

We discuss how certain types of measurement error (e.g., the bank underestimating the repayment ability of the disadvantaged group) affect our comparison. We find that measurement error narrows the regime in which fairness criteria cause decline, suggesting that measurement should be a factor when motivating these criteria.

We consider alternatives to hard fairness constraints.

We evaluate the optimization problem where the fairness criterion enters as a regularization term in the objective. Qualitatively, this leads to the same findings.

We discuss the possibility of optimizing for group score improvement directly subject to institution utility constraints. The resulting solution provides an interesting possible alternative to existing fairness criteria.

We focus on the impact of a selection policy over a single epoch. The motivation is that the designer of a system usually has an understanding of the time horizon after which the system is evaluated and possibly redesigned. Formally, nothing prevents us from repeatedly applying our model and tracing changes over multiple epochs. In reality, however, it is plausible that over greater time periods, economic background variables might dominate the effect of selection.
Reflecting on our findings, we argue that careful temporal modeling is necessary in order to accurately evaluate the impact of different fairness criteria on the population. Moreover, an understanding of measurement error is important in assessing the advantages of fairness criteria relative to unconstrained selection. Finally, the nuances of our characterization underline how intuition may be a poor guide in judging the long-term impact of fairness constraints.
1.2 Related work
Recent work by Hu and Chen (2018) considers a model for long-term outcomes and fairness in the labor market. They propose imposing the demographic parity constraint in a temporary labor market in order to provably achieve an equitable long-term equilibrium in the permanent labor market, reminiscent of economic arguments for affirmative action (Foster and Vohra, 1992). The equilibrium analysis of the labor market dynamics model allows for specific conclusions relating fairness criteria to long-term outcomes. Our general framework is complementary to this type of domain-specific approach.
Fuster et al. (2017) consider the problem of fairness in credit markets from a different perspective. Their goal is to study the effect of machine learning on interest rates in different groups at an equilibrium, under a static model without feedback.
Ensign et al. (2017) consider feedback loops in predictive policing, where the police more heavily monitor high crime neighborhoods, thus further increasing the measured number of crimes in those neighborhoods. While the work addresses an important temporal phenomenon using the theory of urns, it is rather different from our one-step feedback model both conceptually and technically.
Demographic parity and its related formulations have been considered in numerous papers (e.g. Calders et al., 2009; Zafar et al., 2017). Hardt et al. (2016) introduced the equality of opportunity constraint that we consider and demonstrated limitations of a broad class of criteria. Kleinberg et al. (2017) and Chouldechova (2016) point out the tension between “calibration by group” and equal true/false positive rates. These tradeoffs carry over to some extent to the case where we only equalize true positive rates (Pleiss et al., 2017).
A growing literature on fairness in the "bandits" setting of learning (see Joseph et al., 2016, and subsequent work) deals with online decision making that ought not to be confused with our one-step feedback setting. Finally, there has been much work in the social sciences on analyzing the effect of affirmative action (see e.g., Keith et al., 1985; Kalev et al., 2006).
1.3 Discussion
In this paper, we advocate for a view toward long-term outcomes in the discussion of "fair" machine learning. We argue that without a careful model of delayed outcomes, we cannot foresee the impact a fairness criterion would have if enforced as a constraint on a classification system. However, if such an accurate outcome model is available, we show that there are more direct ways to optimize for positive outcomes than via existing fairness criteria. We outline such an outcome-based solution in Section 4.3. Specifically, in the credit setting, the outcome-based solution corresponds to giving out more loans to the protected group in a way that reduces profit for the bank compared to unconstrained profit maximization, but avoids loaning to those who are unlikely to benefit, resulting in a maximally improved group average credit score. The extent to which such a solution could form the basis of successful regulation depends on the accuracy of the available outcome model.
This raises the question of whether our model of outcomes is rich enough to faithfully capture realistic phenomena. By focusing on the impact that selection has on individuals at a given score, we model the effects for those not selected as zero-mean. For example, not getting a loan in our model has no negative effect on the credit score of an individual. (Footnote 1: In reality, a denied credit inquiry may lower one's credit score, but the effect is small compared to a default event.)
This does not mean that wrongful rejection (i.e., a false negative) has no visible manifestation in our model. If a classifier has a higher false negative rate in one group than in another, we expect the classifier to increase the disparity between the two groups (under natural assumptions). In other words, in our outcome-based model, the harm of denied opportunity manifests as growing disparity between the groups. The cost of a false negative could also be incorporated directly into the outcome-based model by a simple modification (see Footnote 2). This may be fitting in some applications where the immediate impact of a false negative to the individual is not zero-mean, but significantly reduces their future success probability.

In essence, the formalism we propose requires us to understand the two-variable causal mechanism that translates decisions to outcomes. This can be seen as relaxing the requirements compared with recent work on avoiding discrimination through causal reasoning that often required stronger assumptions (Kusner et al., 2017; Nabi and Shpitser, 2017; Kilbertus et al., 2017). In particular, these works required knowledge of how sensitive attributes (such as gender, race, or proxies thereof) causally relate to various other variables in the data. Our model avoids the delicate modeling step involving the sensitive attribute, and instead focuses on an arguably more tangible economic mechanism. Nonetheless, depending on the application, such an understanding might necessitate greater domain knowledge and additional research into the specifics of the application. This is consistent with much scholarship that points to the context-sensitive nature of fairness in machine learning.
2 Problem Setting
We consider two groups A and B, which comprise a g_A and g_B = 1 − g_A fraction of the total population, and an institution which makes a binary decision for each individual in each group, called selection. Individuals in each group are assigned scores in X := {x_1, …, x_C} ⊂ R, and the scores for group j ∈ {A, B} are distributed according to π_j. The institution selects a policy τ := (τ_A, τ_B), where τ_j(x) ∈ [0, 1] corresponds to the probability that the institution selects an individual in group j with score x. One should think of a score as an abstract quantity which summarizes how well an individual is suited to being selected; examples are provided at the end of this section.
We assume that the institution is utility-maximizing, but may impose certain constraints to ensure that the policy τ is fair, in a sense described in Section 2.2. We assume that there exists a function u: X → R, such that the institution's expected utility for a policy τ is given by

U(τ) := g_A ⟨τ_A, u ∘ π_A⟩ + g_B ⟨τ_B, u ∘ π_B⟩ ,    (1)

where ∘ denotes the entrywise (Hadamard) product and we identify functions on X with vectors in R^C.
Novel to this work, we focus on the effect of the selection policy on the groups A and B. We quantify these outcomes in terms of an average effect that a policy τ_j has on group j. Formally, for a function Δ: X → R, we define the average change of the mean score for group j as

Δμ_j(τ) := ⟨Δ ∘ π_j, τ_j⟩ .    (2)
We remark that many of our results also go through if Δμ_j simply refers to an abstract change in well-being, not necessarily a change in the mean score. Furthermore, it is possible to modify the definition of Δμ_j such that it directly considers outcomes of those who are not selected. (Footnote 2: If we consider functions Δ_+ and Δ_− to represent the average effect of selection and non-selection respectively, then Δμ_j = ⟨(Δ_+ − Δ_−) ∘ π_j, τ_j⟩ + ⟨Δ_−, π_j⟩. This model corresponds to replacing Δ in the original outcome definition with Δ_+ − Δ_−, and adding a policy-independent offset ⟨Δ_−, π_j⟩. Under the assumption that Δ_+ − Δ_− increases in x, this model gives rise to outcome curves resembling those in Figure 1 up to vertical translation. All presented results hold unchanged under the further assumption that Δ_− ≤ 0.) Lastly, we assume that the success of an individual is independent of their group given the score; that is, the score summarizes all relevant information about the success event, so there exists a function ρ: X → [0, 1] such that individuals of score x succeed with probability ρ(x).
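As a minimal sketch of Equations (1) and (2), treating π_j, u, and Δ as vectors over the score set (all function and variable names here are ours, not the paper's):

```python
import numpy as np

def utility(tau_A, tau_B, pi_A, pi_B, u, g_A):
    """Institution's expected utility U(tau), Equation (1):
    a population-weighted sum of <tau_j, u * pi_j> over the two groups."""
    g_B = 1.0 - g_A
    return g_A * np.dot(tau_A, u * pi_A) + g_B * np.dot(tau_B, u * pi_B)

def delta_mu(tau_j, pi_j, delta):
    """Average change of the mean score for group j, Equation (2):
    Delta-mu_j(tau) = <Delta * pi_j, tau_j>."""
    return float(np.dot(delta * pi_j, tau_j))
```

A policy that selects only high scores with positive Δ yields a positive Δμ_j; an over-eager policy that also selects scores with negative Δ can drive it below zero.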
We now introduce the specific domain of credit scores as a running example in the rest of the paper, after which we present two more examples showing the general applicability of our formulation to many domains.
Example 2.1 (Credit scores).
In the setting of loans, scores x represent credit scores, and the bank serves as the institution. The bank chooses to grant or refuse loans to individuals according to a policy τ. Both bank and personal utilities are given as functions of loan repayment, and therefore depend on the success probabilities ρ(x), representing the probability that any individual with credit score x can repay a loan within a fixed time frame. The expected utility to the bank is given by the expected return from a loan, which can be modeled as an affine function of ρ(x): u(x) = u_+ ρ(x) − u_− (1 − ρ(x)), where u_+ denotes the profit when loans are repaid and u_− the loss when they are defaulted on. Individual outcomes of being granted a loan are based on whether or not an individual repays the loan, and a simple model for Δ(x) may also be affine in ρ(x): Δ(x) = c_+ ρ(x) − c_− (1 − ρ(x)), modified accordingly at boundary states. The constant c_+ denotes the gain in credit score if loans are repaid and c_− is the score penalty in case of default.
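The affine forms above can be sketched as follows; the constants u_±, c_± below are illustrative values of our own choosing, not those used in the paper's experiments:

```python
# Hypothetical constants (ours, for illustration only):
u_plus, u_minus = 1.0, 4.0      # bank's profit on repayment / loss on default
c_plus, c_minus = 75.0, 150.0   # score gain on repayment / penalty on default

def bank_utility(rho):
    """Expected return u(x) = u_+ rho(x) - u_- (1 - rho(x))."""
    return u_plus * rho - u_minus * (1.0 - rho)

def score_change(rho):
    """Expected score change Delta(x) = c_+ rho(x) - c_- (1 - rho(x))."""
    return c_plus * rho - c_minus * (1.0 - rho)

# Break-even repayment probabilities: the bank profits only above
# u_-/(u_+ + u_-); the borrower's score rises only above c_-/(c_+ + c_-).
bank_breakeven = u_minus / (u_plus + u_minus)     # 0.8 with these constants
score_breakeven = c_minus / (c_plus + c_minus)    # about 0.667
```

With these (hypothetical) constants the bank's break-even repayment probability exceeds the borrower's, which is the situation Assumption 1 later formalizes.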
Example 2.2 (Advertising).
A second illustrative example is given by the case of advertising agencies making decisions about which groups to target. An individual with product interest score x responds positively to an ad with probability ρ(x). The ad agency experiences utility related to click-through rates, which increases with ρ(x). Individuals who see the ad but are uninterested may react negatively (becoming less interested in the product), and Δ(x) encodes the interest change. If the product is a positive good like education or employment opportunities, interest can correspond to well-being. Thus the advertising agency's incentives to only show ads to individuals with extremely high interest may leave behind groups whose interest is lower on average. A related historical example occurred in advertisements for computers in the 1980s, where male consumers were targeted over female consumers, arguably contributing to the current gender gap in computing.
Example 2.3 (College Admissions).
The scenario of college admissions or scholarship allotments can also be considered within our framework. Colleges may select certain applicants for acceptance according to a score x, which could be thought to encode a "college preparedness" measure. The students who are admitted might "succeed" (this could be interpreted as graduating, graduating with honors, finding a job placement, etc.) with some probability ρ(x) depending on their preparedness. The college might experience a utility u(x) corresponding to alumni donations, or positive rating when a student succeeds; it might also show a drop in rating or a loss of invested scholarship money when a student is unsuccessful. The student's success in college will affect their later success, which could be modeled generally by Δ(x). In this scenario, it is challenging to ensure that a single summary statistic captures enough information about a student; it may be more appropriate to consider x as a vector as well as more complex forms of ρ.

While a variety of applications are modeled faithfully within our framework, there are limitations to the accuracy with which real-life phenomena can be measured by strictly binary decisions and success probabilities. Such binary rules are necessary for the definition and execution of existing fairness criteria (see Sec. 2.2) and, as we will see, even modeling these facets of decision making as binary allows for complex and interesting behavior.
2.1 The Outcome Curve
We now introduce important outcome regimes, stated in terms of the change in average group score. A policy τ is said to cause active harm to group j if Δμ_j(τ) < 0, stagnation if Δμ_j(τ) = 0, and improvement if Δμ_j(τ) > 0. Under our model, MaxUtil policies can be chosen in a standard fashion which applies the same threshold for both groups, and is agnostic to the distributions π_A and π_B. Hence, if we define

Δμ_j^MaxUtil := Δμ_j(τ^MaxUtil) ,    (3)

we say that a policy τ causes relative harm to group j if Δμ_j(τ) < Δμ_j^MaxUtil, and relative improvement if Δμ_j(τ) > Δμ_j^MaxUtil. In particular, we focus on these outcomes for a disadvantaged group, and consider whether imposing a fairness constraint improves their outcomes relative to the MaxUtil strategy. From this point forward, we take A to be the disadvantaged or protected group.

Figure 1 displays the important outcome regimes in terms of selection rates β. This succinct characterization is possible when considering decision rules based on (possibly randomized) score thresholding, in which all individuals with scores above a threshold are selected. In Section 5, we justify the restriction to such threshold policies by showing that it preserves optimality. In Section 5.1, we show that the outcome curve is concave, thus implying that it takes the shape depicted in Figure 1. To explicitly connect selection rates to decision policies, we define the rate function r_A(τ_A) := ⟨τ_A, π_A⟩, which returns the proportion of group A selected by the policy. We show that this function is invertible for a suitable class of threshold policies, and in fact the outcome curve is precisely the graph of the map from selection rate to outcome, β ↦ Δμ_A(r_A^{-1}(β)). Next, we define the values of β that mark boundaries of the outcome regions.
Definition 2.1 (Selection rates of interest).
Given the protected group A, the following selection rates are of interest in distinguishing between qualitatively different classes of outcomes (Figure 1). We define β_MaxUtil as the selection rate for A under MaxUtil; β_0 as the harm threshold, such that Δμ_A(β_0) = 0; β* as the selection rate such that Δμ_A is maximized; and β̄ as the outcome-complement of the MaxUtil selection rate, with Δμ_A(β̄) = Δμ_A(β_MaxUtil) and β̄ ≥ β_MaxUtil.
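Assuming scores are sorted in ascending order, the outcome curve can be traced by filling in score bins from the top down; the toy numbers below are ours, chosen only to produce the rise-then-fall shape of Figure 1:

```python
import numpy as np

def outcome_curve(pi, delta):
    """Trace selection rate -> Delta-mu for threshold policies,
    selecting score bins from the highest score downward."""
    rates = np.cumsum(pi[::-1])               # selection rate beta
    outcomes = np.cumsum((delta * pi)[::-1])  # Delta-mu at that rate
    return rates, outcomes

# Toy numbers (ours): Delta is negative at low scores and positive at
# high ones, so the curve rises to beta* and later declines past beta_0.
pi = np.array([0.25, 0.25, 0.25, 0.25])
delta = np.array([-150.0, -50.0, 50.0, 100.0])
rates, outcomes = outcome_curve(pi, delta)
beta_star = rates[np.argmax(outcomes)]        # outcome-optimal selection rate
```

In this toy instance β* = 0.5; selecting everyone drives the average change negative, illustrating the decline regime.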
2.2 Decision Rules and Fairness Criteria
We will consider policies that maximize the institution's total expected utility, potentially subject to a constraint which enforces some notion of "fairness". Formally, the institution selects a policy maximizing U(τ) subject to the chosen constraint. We consider the three following constraints:
Definition 2.2 (Fairness criteria).
The maximum utility (MaxUtil) policy corresponds to the null constraint, so that the institution is free to focus solely on utility. The demographic parity (DemParity) policy results in equal selection rates between both groups. Formally, the constraint is ⟨τ_A, π_A⟩ = ⟨τ_B, π_B⟩. The equal opportunity (EqOpt) policy results in equal true positive rates (TPR) between both groups, where the TPR of group j is defined as TPR_j(τ_j) := ⟨ρ ∘ π_j, τ_j⟩ / ⟨ρ, π_j⟩. EqOpt ensures that the conditional probability of selection given that the individual will be successful is independent of the group, formally enforced by the constraint TPR_A(τ_A) = TPR_B(τ_B).
Just as the expected outcome Δμ_A can be expressed in terms of the selection rate for threshold policies, so can the total utility U. In the unconstrained case, U varies independently over the selection rates for groups A and B; however, in the presence of fairness constraints the selection rate for one group determines the allowable selection rate for the other. The selection rates must be equal for DemParity, but for EqOpt we can define a transfer function, G, which for every selection rate β_B in group B gives the selection rate G(β_B) in group A that has the same true positive rate. Therefore, when considering threshold policies, decision rules amount to maximizing functions of single parameters. This idea is expressed in Figure 2, and underpins the results to follow.
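A sketch of the TPR of a threshold policy, and of a transfer function computed by bisection (valid because the TPR is monotone in the selection rate); all names and numbers here are ours:

```python
import numpy as np

def tpr(beta, pi, rho):
    """TPR of the threshold policy with selection rate beta: selected-and-
    successful mass over total successful mass. Scores sorted ascending;
    the policy fills in from the top, randomizing at the marginal bin."""
    remaining, hit = beta, 0.0
    for p, r in zip(pi[::-1], rho[::-1]):
        take = min(remaining, p)
        hit += take * r
        remaining -= take
        if remaining <= 1e-15:
            break
    return hit / float(np.dot(pi, rho))

def transfer(beta_B, pi_A, pi_B, rho, tol=1e-9):
    """Transfer function: the selection rate for group A whose threshold
    policy matches group B's TPR at rate beta_B (found by bisection)."""
    target = tpr(beta_B, pi_B, rho)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tpr(mid, pi_A, rho) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

When the two groups share a distribution, the transfer function reduces to the identity, which is the degenerate case noted after Corollary 3.4.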
3 Results
In order to clearly characterize the outcome of applying fairness constraints, we make the following assumption.
Assumption 1 (Institution utilities).
The institution's individual utility function u is more stringent than the expected score change Δ: whenever u(x) > 0, also Δ(x) > 0. (For the linear form presented in Example 2.1, u_−/u_+ ≥ c_−/c_+ is necessary and sufficient.)
This simplifying assumption quantifies the intuitive notion that institutions take a greater risk by accepting than the individual does by applying. For example, in the credit setting, a bank loses the amount loaned in the case of a default, but makes only interest in case of a payback. Using Assumption 1, we can restrict the position of β_MaxUtil on the outcome curve in the following sense.
Proposition 3.1 (MaxUtil does not cause active harm).
Under Assumption 1, β_MaxUtil ≤ β_0; that is, Δμ_A(τ^MaxUtil) ≥ 0.
We direct the reader to Appendix C for the proof of the above proposition, and all subsequent results presented in this section. The results are corollaries to theorems presented in Section 6.
3.1 Prospects and Pitfalls of Fairness Criteria
We begin by characterizing general settings under which fairness criteria act to improve outcomes over unconstrained strategies. For this result, we will assume that group A is disadvantaged in the sense that the MaxUtil acceptance rate for B is large compared to relevant acceptance rates for A.
Corollary 3.2 (Fairness criteria can cause relative improvement).
(a) Under the assumption that β_MaxUtil < β̄ and β^MaxUtil_B ∈ (β_MaxUtil, β̄), there exists a population proportion g_0 > 0 such that, for all g_A ∈ (0, g_0), Δμ_A(τ^DemParity) > Δμ_A(τ^MaxUtil). That is, DemParity causes relative improvement.
(b) Under the assumption that G(β^MaxUtil_B) ∈ (β_MaxUtil, β̄), there exists a population proportion g_0 > 0 such that, for all g_A ∈ (0, g_0), Δμ_A(τ^EqOpt) > Δμ_A(τ^MaxUtil). That is, EqOpt causes relative improvement.
This result gives the conditions under which we can guarantee the existence of settings in which fairness criteria cause improvement relative to MaxUtil. Relying on machinery proved in Section 6, the result follows from comparing the position of optima on the utility curve to the outcome curve. Figure 2 displays an illustrative example of both the outcome curve and the institution's utility as a function of the selection rate in group A. In the utility function (1), the contributions of each group are weighted by their population proportions g_A and g_B, and thus the resulting selection rates are sensitive to these proportions.
As we see in the remainder of this section, fairness criteria can achieve nearly any position along the outcome curve under the right conditions. This fact comes from the potential mismatch between the outcomes, controlled by Δ, and the institution's utility u.
The next theorem implies that DemParity can be bad for the long-term well-being of the protected group by being over-generous, under the mild assumption that β^MaxUtil_B ≥ β_MaxUtil:
Corollary 3.3 (DemParity can cause harm by being over-eager).
Fix a selection rate β. Assume that Δμ_A(β^MaxUtil_B) < Δμ_A(β). Then, there exists a population proportion g_0 > 0 such that, for all g_A ∈ (0, g_0), Δμ_A(τ^DemParity) < Δμ_A(β). In particular, when β = β_0, DemParity causes active harm, and when β = β̄, DemParity causes relative harm.
The assumption, in the case β = β_0, implies that a policy which selects individuals from group A at the selection rate that MaxUtil would have used for group B necessarily lowers the average score in A. This is one natural notion of protected group A's "disadvantage" relative to group B. In this case, DemParity penalizes the scores of group A even more than a naive MaxUtil policy, as long as the group proportion g_A is small enough. Again, small g_A is another notion of group disadvantage.
Using credit scores as an example, Corollary 3.3 tells us that an overly aggressive fairness criterion will give too many loans to people in a protected group who cannot pay them back, hurting the group’s credit scores on average. In the following theorem, we show that an analogous result holds for .
Corollary 3.4 (EqOpt can cause harm by being over-eager).
Fix a selection rate β. Suppose that G(β^MaxUtil_B) > β_MaxUtil and Δμ_A(G(β^MaxUtil_B)) < Δμ_A(β). Then, there exists a population proportion g_0 > 0 such that, for all g_A ∈ (0, g_0), Δμ_A(τ^EqOpt) < Δμ_A(β). In particular, when β = β_0, EqOpt causes active harm, and when β = β̄, EqOpt causes relative harm.
We remark that in Corollary 3.4, we rely on the transfer function, G, which for every selection rate β_B in group B gives the selection rate G(β_B) in group A that has the same true positive rate. Notice that if G were the identity function, Corollary 3.3 and Corollary 3.4 would be exactly the same. Indeed, our framework (detailed in Section 6 and Appendix B) unifies the analyses for a large class of fairness constraints that includes DemParity and EqOpt as specific cases, and allows us to derive results about impact on Δμ_A using general techniques. In the next section, we present further results that compare the fairness criteria, demonstrating the usefulness of our technical framework.
3.2 Comparing DemParity and EqOpt
Our analysis of the acceptance rates of DemParity and EqOpt in Section 6 suggests that it is difficult to compare the two criteria without knowing the full distributions π_A and π_B, which are necessary to compute the transfer function G. In fact, we have found that settings exist both in which DemParity causes harm while EqOpt causes improvement, and in which EqOpt causes harm while DemParity causes improvement. There cannot be one general rule as to which fairness criterion provides better outcomes in all settings. We now present simple sufficient conditions on the geometry of the distributions for which EqOpt is always better than DemParity in terms of Δμ_A.
Corollary 3.5 (EqOpt may avoid active harm where DemParity fails).
Fix a selection rate β. Suppose π_A and π_B are identical up to a translation with μ_A < μ_B, i.e., group A's score distribution is group B's shifted down. For simplicity, take ρ to be linear in x. Suppose Δμ_A(β_μ) > 0, where β_μ denotes the selection rate of the policy that selects exactly the individuals in A with score above the group mean μ_A.
Then there exists an interval (g_l, g_u) of population proportions such that, for g_A ∈ (g_l, g_u), Δμ_A(τ^DemParity) < Δμ_A(β) while Δμ_A(τ^EqOpt) > Δμ_A(β). In particular, when β = β_0, this implies that DemParity causes active harm but EqOpt causes improvement for g_A ∈ (g_l, g_u); moreover, for any g_A such that DemParity causes improvement, EqOpt also causes improvement.
To interpret the conditions under which Corollary 3.5 holds, consider when we might have Δμ_A(β_μ) > 0. This is precisely when the average score change is positive for a policy that selects every individual whose score is above the group mean, which is reasonable in reality. Indeed, the converse would imply that group A has such low scores that even selecting all above-average individuals in A would hurt the average score. In such a case, Corollary 3.5 suggests that EqOpt is better than DemParity at avoiding active harm, because it is more conservative. A natural question then is: can EqOpt cause relative harm by being too stingy?
Corollary 3.6 (DemParity never loans less than MaxUtil, but EqOpt might).
Recall the definition of the TPR functions TPR_j, and suppose that the MaxUtil policy is such that

TPR_A(τ^MaxUtil_A) > TPR_B(τ^MaxUtil_B) .    (4)

Then Δμ_A(τ^EqOpt) < Δμ_A(τ^MaxUtil). That is, EqOpt causes relative harm by selecting at a rate lower than β_MaxUtil.
The above theorem shows that DemParity is never stingier than MaxUtil to the protected group A, as long as A is disadvantaged in the sense that MaxUtil selects a larger proportion of group B than of group A. On the other hand, EqOpt can select less of group A than MaxUtil does, and by definition cause relative harm. This is a surprising result about EqOpt, and the phenomenon arises from high levels of in-group inequality in group A. Moreover, we show in Appendix C that there are parameter settings where the conditions in Corollary 3.6 are satisfied even under a stringent notion of disadvantage we call CDF domination, described therein.
4 Relaxations of Constrained Fairness
4.1 Regularized fairness
In many cases, it may be unrealistic for an institution to ensure that fairness constraints are met exactly. However, one can consider "soft" formulations of fairness constraints which penalize either the differences in acceptance rates (soft DemParity) or the differences in TPR (soft EqOpt). In Appendix B, we formulate these soft constraints as regularized objectives. For example, a soft DemParity can be rendered as

max_τ U(τ) − λ · R_f( ⟨τ_A, π_A⟩ − ⟨τ_B, π_B⟩ ) ,    (5)
where λ > 0 is a regularization parameter, and R_f is a convex regularization function. We show that the solutions to these objectives are threshold policies, and can be fully characterized in terms of the group-wise selection rates. We also make rigorous the notion that policies which solve the soft-constraint objective interpolate between MaxUtil policies at λ = 0 and hard-constrained policies (DemParity or EqOpt) as λ → ∞. This fact is clearly demonstrated by the form of the solutions in the special case of a particular regularization function, provided in the appendix.

4.2 Fairness Under Measurement Error
Next, consider the implications of an institution with imperfect knowledge of scores. We adopt a simple model in which the estimate x̂ of an individual's score x is prone to errors, such that x̂ = x + ε. Constraining the error ε to be negative results in the setting where scores are systematically underestimated. In this setting, it is equivalent to consider the underestimated distribution π̂_A to be stochastically dominated by the true distribution π_A; that is, Φ̂_A(x) ≥ Φ_A(x) for all x ∈ X, where Φ denotes the cumulative distribution function. Then we can compare the institution's behavior under this estimation to its behavior under the truth.

Proposition 4.1 (Underestimation causes underselection).
Fix the true distribution of group A as π_A, and let β^c denote the acceptance rate for A when the institution decides under criterion c ∈ {MaxUtil, DemParity, EqOpt} with perfect knowledge of the distribution π_A. Denote by β̂^c the acceptance rate when the group is instead taken as distributed according to π̂_A. Then β̂^MaxUtil ≤ β^MaxUtil and β̂^DemParity ≤ β^DemParity. If the errors are further such that the true TPR dominates the estimated TPR, it is also true that β̂^EqOpt ≤ β^EqOpt.
Because fairness criteria encourage a higher selection rate for disadvantaged groups (Corollary 3.2), systematic underestimation widens the regime of their applicability. Furthermore, since the estimated policy under-loans, the region for relative improvement in the outcome curve (Figure 1) is larger, corresponding to more regimes under which fairness criteria can yield favorable outcomes. Thus the potential for measurement error should be a factor when motivating these criteria.
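The effect of systematic underestimation on the MaxUtil selection rate can be illustrated with a toy discrete model (all numbers below are ours):

```python
import numpy as np

# A true score distribution and a systematically underestimated one
# whose CDF dominates the true CDF (mass shifted toward low scores).
scores  = np.array([300.0, 500.0, 650.0, 800.0])
pi_true = np.array([0.2, 0.3, 0.3, 0.2])
pi_est  = np.array([0.3, 0.3, 0.3, 0.1])

rho = np.array([0.30, 0.60, 0.75, 0.95])   # repayment probability per score
u = 1.0 * rho - 4.0 * (1.0 - rho)          # affine bank utility, as in Example 2.1

# MaxUtil selects exactly the scores with positive expected utility;
# under underestimation, less probability mass sits at those scores.
beta_true = pi_true[u > 0].sum()
beta_est  = pi_est[u > 0].sum()
```

Here the underestimated group is selected at a strictly lower rate, matching Proposition 4.1.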
4.3 Outcome-based alternative
As explained in the preceding sections, fairness criteria may actively harm disadvantaged groups. It is thus natural to consider a modified decision rule which involves the explicit maximization of Δμ_A. In this case, imagine that the institution's primary goal is to aid the disadvantaged group, subject to a limited profit loss compared to the maximum possible expected profit U^MaxUtil := U(τ^MaxUtil). The corresponding problem is as follows.
max_τ Δμ_A(τ)   subject to   U(τ) ≥ (1 − δ) · U^MaxUtil ,    (6)

for some tolerated profit loss δ ∈ [0, 1].
Unlike the fairness-constrained objectives, this objective no longer depends on group B and instead depends on our model of the mean score change in group A, Δμ_A.
Proposition 4.2 (Outcome-based solution).
In the above setting, the optimal bank policy is a threshold policy with selection rate min(β*, β_max), where β* is the outcome-optimal loan rate and β_max is the maximum loan rate under the bank's "budget".
The above formulation's advantage over fairness constraints is that it directly optimizes the outcome of group A and can be approximately implemented given reasonable ability to predict outcomes. Importantly, this objective shifts the focus to outcome modeling, highlighting the importance of domain-specific knowledge. Future work can consider strategies that are robust to outcome model errors.
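A discretized sketch of Proposition 4.2, restricted for simplicity to group A's contribution to the bank's utility (a simplifying assumption of ours, not the paper's exact constraint):

```python
import numpy as np

def outcome_based_rate(pi, delta, u_vals, profit_frac):
    """Selection rate min(beta*, beta_max): beta* maximizes the outcome
    curve; beta_max is the largest rate keeping utility within a
    profit_frac fraction of its maximum. Scores sorted ascending."""
    rates = np.cumsum(pi[::-1])                 # selection rate, top down
    outcomes = np.cumsum((delta * pi)[::-1])    # Delta-mu along the curve
    profits = np.cumsum((u_vals * pi)[::-1])    # utility along the curve
    beta_star = rates[np.argmax(outcomes)]
    feasible = profits >= profit_frac * profits.max()
    beta_max = rates[np.where(feasible)[0].max()]
    return min(beta_star, beta_max)
```

A tight budget (profit_frac near 1) clips the rate at β_max; a looser budget lets the policy reach the outcome-optimal rate β*.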
5 Optimality of Threshold Policies
Next, we move towards statements of the main theorems underlying the results presented in Section 3. We begin by establishing notation which we shall use throughout. Recall that $x \circ y$ denotes the Hadamard (entrywise) product between vectors. We identify functions mapping $\mathcal{X} \to [0,1]$ with vectors in $[0,1]^C$, where $C := |\mathcal{X}|$. We also define the group-wise utilities
(7) $U_j(\tau_j) := \langle \pi_j \circ u,\ \tau_j \rangle,$
so that, for $j \in \{A, B\}$, $U(\tau) = g_A U_A(\tau_A) + g_B U_B(\tau_B)$.
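As a toy instantiation of this vector notation (all numbers hypothetical), a group-wise utility is just an inner product between the policy vector and the Hadamard product of the score distribution with the per-score profit:

```python
import numpy as np

pi_A  = np.array([0.1, 0.2, 0.4, 0.2, 0.1])    # score distribution, group A
u     = np.array([-2.0, -1.0, 0.5, 1.5, 2.0])  # expected profit per score
tau_A = np.array([0.0, 0.0, 0.5, 1.0, 1.0])    # a (threshold) policy as a vector

# Group-wise utility as an inner product against the Hadamard product pi ∘ u.
U_A = np.dot(pi_A * u, tau_A)   # = 0.4*0.5*0.5 + 0.2*1.5 + 0.1*2.0 = 0.6
```

The same inner-product form is what makes the convex-analysis arguments later in this section go through.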
First, we formally describe threshold policies, and rigorously justify why we may always assume without loss of generality that the institution adopts policies of this form.
Definition 5.1 (Threshold selection policy).
A single-group selection policy $\tau \in [0,1]^C$ is called a threshold policy if it has the form of a randomized threshold on score:
(8) $\tau_{c,\gamma}(x) := \begin{cases} 1 & x > c \\ \gamma & x = c \\ 0 & x < c \end{cases} \quad \text{for some } c \in \mathcal{X},\ \gamma \in [0,1].$
As a technicality, if no members of a population have a given score $x$, there may be multiple threshold policies which yield equivalent selection rates for a given population. To avoid redundancy, we introduce the notation $\tau \simeq_{\pi} \tau'$ to mean that the set of scores on which $\tau$ and $\tau'$ differ has probability $0$ under $\pi$; formally, $\Pr_{x \sim \pi}\left(\tau(x) \neq \tau'(x)\right) = 0$. For any distribution $\pi$, $\simeq_{\pi}$ is an equivalence relation. Moreover, we see that if $\tau_j \simeq_{\pi_j} \tau'_j$, then $\tau_j$ and $\tau'_j$ both provide the same utility for the institution, induce the same outcomes for individuals in group $j$, and have the same selection and true positive rates. Hence, if $\tau = (\tau_A, \tau_B)$ is an optimal solution to any of MaxUtil, DemParity, or EqOpp, so is any $\tau' = (\tau'_A, \tau'_B)$ for which $\tau_A \simeq_{\pi_A} \tau'_A$ and $\tau_B \simeq_{\pi_B} \tau'_B$.
For threshold policies in particular, their equivalence class under $\simeq_{\pi}$ is uniquely determined by the selection rate function
(9) $r_{\pi}(\tau) := \langle \pi, \tau \rangle = \mathbb{E}_{x \sim \pi}[\tau(x)],$
which denotes the fraction of the group which is selected. Indeed, we have the following lemma (proved in Appendix A.1):
Lemma 5.1.
Let $\tau$ and $\tau'$ be threshold policies. Then $\tau \simeq_{\pi} \tau'$ if and only if $r_{\pi}(\tau) = r_{\pi}(\tau')$. Further, $r_{\pi}$ is a bijection from $\mathfrak{T}(\pi)$ to $[0,1]$, where $\mathfrak{T}(\pi)$ is the set of equivalence classes between threshold policies under $\simeq_{\pi}$. Finally, $r_{\pi}^{-1}$ is well defined.
Remark that $r_{\pi}^{-1}(\beta)$ is an equivalence class rather than a single policy. However, any quantity of interest evaluated at $r_{\pi}^{-1}(\beta)$ is well defined, meaning that it takes the same value for any two policies in the same equivalence class. Since all quantities of interest will only depend on policies through $r_{\pi}$, it does not matter which representative of $r_{\pi}^{-1}(\beta)$ we pick. Hence, abusing notation slightly, we shall represent $r_{\pi}^{-1}(\beta)$ by choosing one representative from each equivalence class under $\simeq_{\pi}$.³ ³One way to do this is to consider the set of all threshold policies $\tau_{c,\gamma}$ with $\gamma > 0$, and with $\gamma = 1$ whenever $\pi(c) = 0$.
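The selection-rate map and its inversion can be sketched numerically. The greedy helper `threshold_policy` below is our own construction over a hypothetical distribution (it fills mass from the top score down, and assumes every score has positive mass); it returns one representative of the equivalence class with the prescribed rate, and the round trip recovers that rate.

```python
import numpy as np

pi = np.array([0.15, 0.25, 0.3, 0.2, 0.1])     # hypothetical score distribution

def selection_rate(pi, tau):
    """r_pi(tau) = <pi, tau>: the fraction of the group selected."""
    return np.dot(pi, tau)

def threshold_policy(pi, beta):
    """A representative threshold policy with selection rate beta."""
    tau, remaining = np.zeros_like(pi), beta
    for x in range(len(pi) - 1, -1, -1):       # fill from the top score down
        tau[x] = min(1.0, remaining / pi[x])
        remaining -= tau[x] * pi[x]
        if remaining <= 1e-12:
            break
    return tau

# Round trip: on threshold policies, the rate map acts as a bijection.
for beta in [0.0, 0.1, 0.37, 0.62, 1.0]:
    assert abs(selection_rate(pi, threshold_policy(pi, beta)) - beta) < 1e-9
```

The randomization at the boundary score is what makes every rate in $[0,1]$ attainable despite the discrete support.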
It turns out that the policies which arise in this way are always optimal in the sense that, for a given loan rate $\beta$, the threshold policy $r_{\pi}^{-1}(\beta)$ is the (essentially unique) policy which maximizes both the institution's utility and the utility of the group. Defining the group-wise utility and outcome for a generic score distribution $\pi$ as
(10) $U_{\pi}(\tau) := \langle \pi \circ u, \tau \rangle \quad \text{and} \quad \Delta\mu_{\pi}(\tau) := \langle \pi \circ \Delta, \tau \rangle,$
we have the following result:
Proposition 5.1 (Threshold policies are preferable).
Suppose that $u(x)$ and $\Delta(x)$ are strictly increasing in $x$. Given any loaning policy $\tau$ for a population with score distribution $\pi$, the threshold policy $\tilde\tau := r_{\pi}^{-1}(r_{\pi}(\tau))$ satisfies
(11) $U_{\pi}(\tilde\tau) \geq U_{\pi}(\tau) \quad \text{and} \quad \Delta\mu_{\pi}(\tilde\tau) \geq \Delta\mu_{\pi}(\tau).$
Moreover, both inequalities hold with equality if and only if $\tau \simeq_{\pi} \tilde\tau$.
The map $\tau \mapsto r_{\pi}^{-1}(r_{\pi}(\tau))$ can be thought of as transforming an arbitrary policy into a threshold policy with the same selection rate. In this language, the above proposition states that this map never reduces institutional utility or individual outcomes. We can also show that optimal MaxUtil and DemParity policies are threshold policies, as well as all EqOpp policies under an additional assumption:
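A quick randomized check of this claim, with hypothetical increasing profit and score-change vectors of our choosing: replacing an arbitrary policy by the threshold policy at its own selection rate never decreases either inner product.

```python
import numpy as np

rng = np.random.default_rng(0)
pi  = np.array([0.1, 0.15, 0.25, 0.25, 0.15, 0.1])   # hypothetical distribution
u   = np.array([-1.5, -0.8, -0.1, 0.4, 1.0, 1.8])    # increasing in x
dlt = np.array([-0.6, -0.3, 0.1, 0.4, 0.7, 0.9])     # increasing in x

def threshold_policy(pi, beta):
    """Greedy top-down fill: threshold policy with selection rate beta."""
    tau, remaining = np.zeros_like(pi), beta
    for x in range(len(pi) - 1, -1, -1):
        tau[x] = min(1.0, remaining / pi[x])
        remaining -= tau[x] * pi[x]
        if remaining <= 1e-12:
            break
    return tau

for _ in range(200):
    tau = rng.uniform(size=len(pi))              # arbitrary policy
    t   = threshold_policy(pi, np.dot(pi, tau))  # same selection rate
    assert np.dot(pi * u,   t) >= np.dot(pi * u,   tau) - 1e-9
    assert np.dot(pi * dlt, t) >= np.dot(pi * dlt, tau) - 1e-9
```

This is the discrete rearrangement intuition behind the proposition: with increasing weights, concentrating the selected mass on the highest scores dominates any other allocation of the same total mass.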
Proposition 5.2 (Existence of optimal threshold policies under fairness constraints).
Suppose that $u(x)$ is strictly increasing in $x$. Then all optimal MaxUtil policies $\tau = (\tau_A, \tau_B)$ satisfy $\tau_j \simeq_{\pi_j} \tilde\tau_j$ for some threshold policies $\tilde\tau_j$, for $j \in \{A, B\}$. The same holds for all optimal DemParity policies, and if in addition $\rho(x)$ is increasing in $x$, the same is true for all optimal EqOpp policies.
To prove Proposition 5.1, we invoke the following general lemma, which is proved using standard convex-analysis arguments (in Appendix A.2):
Lemma 5.2.
Let $v, w \in \mathbb{R}^C$, and let $\pi \in \Delta(\mathcal{X})$, and suppose either that $w = \mathbf{1}$ and $v(x)$ is increasing in $x$, or that $w$ is strictly positive and $v(x)/w(x)$ is increasing in $x$. Let $\mathcal{F}(\beta) := \{\tau \in [0,1]^C : \langle \pi \circ w, \tau \rangle = \beta\}$ and fix $\beta$ for which $\mathcal{F}(\beta) \neq \emptyset$. Then any
(12) $\tau^* \in \operatorname*{arg\,max}_{\tau \in \mathcal{F}(\beta)}\ \langle \pi \circ v, \tau \rangle$
satisfies $\tau^* \simeq_{\pi} \tau_{c,\gamma}$ for some threshold policy $\tau_{c,\gamma}$. Moreover, at least one maximizer exists.
Proof of Proposition 5.1.
We will first prove Proposition 5.1 for the function $U_{\pi}$. Given our nominal policy $\tau$, let $\beta = r_{\pi}(\tau)$. We now apply Lemma 5.2 with $v = u$ and $w = \mathbf{1}$. For this choice of $v$ and $w$, $\langle \pi \circ v, \tau \rangle = U_{\pi}(\tau)$ and $\langle \pi \circ w, \tau \rangle = r_{\pi}(\tau)$. Then, if $\tau \not\simeq_{\pi} r_{\pi}^{-1}(\beta)$, Lemma 5.2 implies that $U_{\pi}(r_{\pi}^{-1}(\beta)) > U_{\pi}(\tau)$.
On the other hand, assume that $\tau \simeq_{\pi} r_{\pi}^{-1}(\beta)$. We show that $\tau$ is a maximizer, which will imply that $r_{\pi}^{-1}(\beta)$ is a maximizer, since $\tau \simeq_{\pi} \tau'$ implies that $U_{\pi}(\tau) = U_{\pi}(\tau')$. By Lemma 5.2 there exists a maximizer $\tau^*$, which means that $\tau^* \simeq_{\pi} r_{\pi}^{-1}(\beta) \simeq_{\pi} \tau$. Since $\tau$ is feasible, we must have $U_{\pi}(\tau) \leq U_{\pi}(\tau^*)$, and thus $U_{\pi}(\tau) = U_{\pi}(\tau^*)$, as needed. The same argument follows verbatim if we instead choose $v = \Delta$, for which $\langle \pi \circ v, \tau \rangle = \Delta\mu_{\pi}(\tau)$. ∎
We now argue Proposition 5.2 for MaxUtil, as it is a straightforward application of Lemma 5.2. We will prove Proposition 5.2 for DemParity and EqOpp separately in Sections 6.1 and 6.2.
5.1 Quantiles and Concavity of the Outcome Curve
To further our analysis, we now introduce left and right quantile functions, allowing us to specify thresholds in terms of both selection rate and score cutoffs.
Definition 5.2 (Upper quantile function).
Define $Q_{\pi}$ to be the upper quantile function corresponding to $\pi$, i.e.
(13) $Q_{\pi}(\beta) := \min\{x \in \mathcal{X} : \Pr_{x' \sim \pi}(x' > x) \leq \beta\},$
so that a threshold at $Q_{\pi}(\beta)$, with appropriate randomization at that score, selects exactly the top $\beta$ fraction of $\pi$.
Crucially, $Q_{\pi}$ is continuous from the right, and its left counterpart $Q_{\pi}^{-}(\beta) := \max\{x \in \mathcal{X} : \Pr_{x' \sim \pi}(x' \geq x) \geq \beta\}$ is continuous from the left. Further, $Q_{\pi}$ and $Q_{\pi}^{-}$ allow us to compute derivatives of key functions, like the mapping from a selection rate to the group outcome associated with a policy of that rate, $\beta \mapsto \Delta\mu_{\pi}(r_{\pi}^{-1}(\beta))$. Because we take $\pi$ to have discrete support, all functions in this work are piecewise linear, so we shall need to distinguish between the left and right derivatives, defined as follows:
(14) $\partial^{-} f(\beta) := \lim_{\beta' \uparrow \beta} \frac{f(\beta) - f(\beta')}{\beta - \beta'}, \qquad \partial^{+} f(\beta) := \lim_{\beta' \downarrow \beta} \frac{f(\beta') - f(\beta)}{\beta' - \beta}.$
For $f$ supported on $[0,1]$, we say that $f$ is left (resp. right) differentiable if $\partial^{-} f(\beta)$ exists for all $\beta \in (0,1]$ (resp. $\partial^{+} f(\beta)$ exists for all $\beta \in [0,1)$). We now state the fundamental derivative computation which underpins the results to follow:
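The left/right quantile distinction can be made concrete as follows. The uniform distribution over eight scores (chosen so that all tail probabilities are exact binary fractions) and the particular max/min conventions below are our illustrative assumptions: the two quantiles differ exactly at breakpoints of the tail probability and agree everywhere else.

```python
import numpy as np

pi = np.full(8, 0.125)                 # uniform over scores 0..7
tail = lambda x: pi[x:].sum()          # P(X >= x)

def Q_minus(beta):
    """Left quantile: largest x with P(X >= x) >= beta."""
    return max(x for x in range(len(pi)) if tail(x) >= beta)

def Q_plus(beta):
    """Right quantile: smallest x with P(X > x) <= beta."""
    return min(x for x in range(len(pi)) if tail(x + 1) <= beta)

# They differ at a breakpoint of the tail probability (0.25 = P(X >= 6))...
assert Q_minus(0.25) == 6 and Q_plus(0.25) == 5
# ...and agree in between, with Q_minus >= Q_plus everywhere.
assert Q_minus(0.3) == Q_plus(0.3) == 5
```

This mismatch at breakpoints is precisely why the selection-rate-to-outcome map has distinct left and right derivatives.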
Lemma 5.3.
Let $w \in \mathbb{R}^C$ denote a vector of per-score weights, and let $C_{\pi,w}$ denote the map such that $C_{\pi,w}(\beta) := \langle \pi \circ w, r_{\pi}^{-1}(\beta) \rangle$, which is well defined for $\beta \in [0,1]$. Then $C_{\pi,w}$ is continuous, and has left and right derivatives
(15) $\partial^{-} C_{\pi,w}(\beta) = w(Q_{\pi}^{-}(\beta)), \qquad \partial^{+} C_{\pi,w}(\beta) = w(Q_{\pi}(\beta)).$
The above lemma is proved in Appendix A.3. Moreover, Lemma 5.3 implies that the outcome curve is concave under the assumption that $\Delta(x)$ is monotone:
Proposition 5.3.
Let $\pi$ be a distribution over the $C$ states, and suppose that $\Delta(x)$ is nondecreasing in $x$. Then $\beta \mapsto \Delta\mu_{\pi}(r_{\pi}^{-1}(\beta))$ is concave. In fact, if $h$ is any nondecreasing map from $\mathcal{X}$ to $\mathbb{R}$, then $C_{\pi,h}$ is concave.
Proof.
Recall that a univariate function $f$ is concave (and finite) on $[0,1]$ if and only if (a) $f$ is left- and right-differentiable, (b) $\partial^{+} f(\beta) \leq \partial^{-} f(\beta)$ for all $\beta \in (0,1)$, and (c) for any $0 \leq \beta_1 < \beta_2 \leq 1$, $\partial^{-} f(\beta_2) \leq \partial^{+} f(\beta_1)$.
Observe that $\Delta\mu_{\pi}(r_{\pi}^{-1}(\beta)) = C_{\pi,\Delta}(\beta)$. By Lemma 5.3, $C_{\pi,\Delta}$ has right and left derivatives. Hence, we have that
(16) $\partial^{+} C_{\pi,\Delta}(\beta) = \Delta(Q_{\pi}(\beta)), \qquad \partial^{-} C_{\pi,\Delta}(\beta) = \Delta(Q_{\pi}^{-}(\beta)).$
Using the fact that $\Delta$ is monotone, and that $Q_{\pi}^{-}(\beta) \geq Q_{\pi}(\beta)$, we see that $\partial^{-} C_{\pi,\Delta}(\beta) \geq \partial^{+} C_{\pi,\Delta}(\beta)$, and that $\partial^{-} C_{\pi,\Delta}$ and $\partial^{+} C_{\pi,\Delta}$ are nonincreasing, from which it follows that $C_{\pi,\Delta}$ is concave. The general concavity result holds by replacing $\Delta$ with $h$. ∎
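As a numerical corroboration of the concavity claim, with a hypothetical distribution and nondecreasing score-change vector of our choosing, the outcome curve traced out by threshold policies has nonpositive discrete second differences:

```python
import numpy as np

pi  = np.array([0.05, 0.15, 0.3, 0.25, 0.15, 0.1])   # hypothetical distribution
dlt = np.array([-0.7, -0.2, 0.1, 0.35, 0.6, 0.9])    # nondecreasing in x

def threshold_policy(pi, beta):
    """Greedy top-down fill: threshold policy with selection rate beta."""
    tau, remaining = np.zeros_like(pi), beta
    for x in range(len(pi) - 1, -1, -1):
        tau[x] = min(1.0, remaining / pi[x])
        remaining -= tau[x] * pi[x]
        if remaining <= 1e-12:
            break
    return tau

betas = np.linspace(0, 1, 101)
dmu = np.array([np.dot(pi * dlt, threshold_policy(pi, b)) for b in betas])

# Discrete second differences of a concave function are nonpositive.
second_diff = dmu[:-2] - 2 * dmu[1:-1] + dmu[2:]
assert np.all(second_diff <= 1e-9)
```

The curve is piecewise linear with slopes given by the score-change values in decreasing score order, which is exactly the derivative formula of Lemma 5.3.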
6 Proofs of Main Theorems
We are now ready to present and prove theorems that characterize the selection rates under the fairness constraints, namely $\beta^{\mathrm{DemParity}}$ and $\beta^{\mathrm{EqOpp}}$. These characterizations are crucial for proving the results in Section 3. Our computations also generalize readily to other linear constraints, in a way that will become clear in Section 6.2.