Optimal Decision Making Under Strategic Behavior

May 22, 2019 · Moein Khajehnejad et al. · Max Planck Institute for Software Systems, Max Planck Society

We are witnessing an increasing use of data-driven predictive models to inform decisions. As decisions have implications for individuals and society, there is increasing pressure on decision makers to be transparent about their decision policies, models, and the features they use. At the same time, individuals may use the knowledge gained through transparency to invest effort strategically and maximize their chances of receiving a beneficial decision. In this paper, our goal is to find decision policies that are optimal in terms of utility in such a strategic setting. To this end, we first use the theory of optimal transport to characterize how strategic investment of effort by individuals leads to a change in the feature distribution at a population level. Then, we show that, in contrast with the non-strategic setting, optimal decision policies are stochastic, and we cannot expect to find them in polynomial time. Finally, we derive an efficient greedy algorithm that is guaranteed to find locally optimal decision policies in polynomial time. Experiments on synthetic and real lending data illustrate our theoretical findings and show that the decision policies found by our greedy algorithm achieve higher utility than deterministic threshold rules, which are optimal policies in a non-strategic setting.


1 Introduction

Consequential decisions across a wide variety of domains, from hiring and banking to the judiciary, are increasingly informed by data-driven predictive models. In all these domains, the decision maker aims to employ a decision policy that maximizes a given utility function while the predictive model aims to provide an accurate prediction of the outcome of the process from a set of observable features. For example, in loan decisions, a bank may decide whether or not to offer a loan to an applicant on the basis of a predictive model’s estimate of the probability that the individual would repay the loan.

In this context, there is increasing pressure on decision makers to be transparent about the decision policies, the predictive models, and the features they use. However, individuals are incentivized to use this knowledge to invest effort strategically in order to receive a beneficial decision. With this motivation, there has been a recent flurry of work on strategic classification bruckner2012static ; bruckner2011stackelberg ; dalvi2004adversarial ; dong2018strategic ; hardt2016strategic ; hu2019disparate ; milli2018social . This line of work has focused on the development of accurate predictive models and has shown that, under certain technical conditions, it is possible to protect predictive models against misclassification errors that would otherwise result from this strategic behavior. In this work, rather than accurate predictive models, we pursue the development of decision policies that maximize utility in this strategic setting. The work most closely related to ours is by Kleinberg and Raghavan kleinberg2018classifiers , which also considers the design of decision policies in a strategic setting; however, their problem formulation and assumptions are fundamentally different and their technical contributions are orthogonal to ours. More broadly, our work also relates to several recent studies on the long-term consequences of machine learning algorithms hu2018short ; liu2018delayed ; mouzannar2019fair ; Tabibian2019 and recommender systems schnabel2018short ; sinha2016deconvolving .

Once we focus on the utility of a decision policy, it is overly pessimistic to always view an individual’s strategic effort as some form of gaming, and thus undesirable—an individual’s effort to change her features may sometimes lead to genuine self-improvement, as noted by several studies in the economics literature coate1993will ; fryer2013valuing ; hu2018short and, more recently, in the theoretical computer science literature kleinberg2018classifiers . For example, in car insurance decisions, if an insurance company uses the number of speeding tickets a driver receives to decide how much to charge the driver, she may feel compelled to drive more carefully to pay a lower price, and this will likely make her a better driver. In loan decisions, if a bank uses credit card debt to decide about the interest rate it offers to a customer, she may feel compelled to avoid credit card debt overall to pay less interest, and this will improve her financial situation. In hiring decisions, if a law firm uses the number of internships to decide whether to offer a job to an applicant, she may feel compelled to do more internships during her studies to increase her chances of getting hired, and this will improve her job performance. In all these scenarios, the decision maker—insurance company, bank, or law firm—would like to find a decision policy that incentivizes individuals to invest in forms of effort that increase the utility of the policy, that is, reduce payouts or default rates, or increase job performance.

We cast the above problem as a Stackelberg game in which the decision maker moves first and shares her decision policy before individuals best-respond and invest effort to maximize their chances of receiving a beneficial decision under the policy. Then, we characterize how this strategic investment of effort leads to a change in the feature distribution at a population level. More specifically, we derive an analytical expression for the feature distribution induced by any policy in terms of the original feature distribution by solving an optimal transport problem villani2008optimal . Based on this analytical expression, we make the following contributions:


  • We show that the optimal decision policies are stochastic. This is in contrast with the non-strategic setting where deterministic threshold rules are optimal corbett2017algorithmic ; valera2018enhancing .

  • We show that the problem of finding the optimal decision policies is NP-hard via a novel reduction from the Boolean satisfiability (SAT) problem karp1972reducibility .

  • We introduce an efficient greedy algorithm (refer to Algorithm 1) that is guaranteed to find locally optimal decision policies in polynomial time by solving a sequence of linear programs.

Finally, we perform a variety of experiments using synthetic and real lending data to illustrate the above theoretical findings and show that the decision policies found by our greedy algorithm achieve higher utility than deterministic threshold rules. (We will release an open-source implementation of our greedy algorithm with the final version of the paper.)

2 Decision policies, utilities, and individual benefits

Given an individual with a feature vector x and a (ground-truth) label y ∈ {0, 1}, a decision d ∈ {0, 1} controls whether the label is realized. As an example, in a loan decision, the decision specifies whether the individual receives a loan (d = 1) or her application is rejected (d = 0); the label indicates whether an individual repays the loan (y = 1) or defaults (y = 0) upon receiving it; and the feature vector may include an individual’s salary, education, or credit history (for simplicity, we assume features are discrete and, without loss of generality, that each feature takes a finite number of values).

Each decision d is sampled from a decision policy π(d | x) and, for each individual, the label y is sampled from P(y | x). Moreover, we adopt a Stackelberg game-theoretic formulation in which the decision maker publishes her decision policy before individuals (best-)respond. As will become clearer in the next section, individual best responses lead to a change in the feature distribution at a population level—we will say that the new feature distribution is induced by the policy π. Then, we measure the (immediate) utility a decision maker obtains using a policy π as the average overall profit she obtains corbett2017algorithmic ; kilbertus2019improving ; valera2018enhancing , i.e.,

u(\pi, \gamma) = \mathbb{E}_{x \sim P(x \mid \pi)}\left[\pi(d=1 \mid x)\, P(y=1 \mid x)\right] \;-\; \gamma\, \mathbb{E}_{x \sim P(x \mid \pi)}\left[\pi(d=1 \mid x)\right] \qquad (1)

where γ is a given constant reflecting economic considerations of the decision maker. For example, in a loan scenario, the first term is proportional to the expected number of individuals who receive and repay a loan, the second term is proportional to the number of individuals who receive a loan, and γ measures the cost of offering a loan in units of repaid loans. Finally, we define the (immediate) individual benefit an individual with features x obtains as the probability that she receives a beneficial decision, i.e.,

b(\pi, x) = \mathbb{E}_{d \sim \pi(d \mid x)}\left[f(d)\right] \qquad (2)

where the function f(·) is problem dependent. For example, in a loan scenario, one may define f(d) = d and thus the benefit is proportional to the probability that she receives a loan.
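To make these quantities concrete, here is a minimal sketch (ours, not code from the paper) that evaluates the utility of Eq. 1 and the individual benefit of Eq. 2 for a stochastic policy over discrete feature values, assuming f(d) = d as in the loan example; the toy numbers are purely illustrative.

```python
import numpy as np

def utility(pi, p_x, p_y1, gamma):
    """Utility of Eq. 1 for a policy over discrete feature values.

    pi[i]   -- probability of a positive decision, pi(d=1 | x_i)
    p_x[i]  -- feature distribution P(x_i) (no strategic response here)
    p_y1[i] -- conditional probability P(y=1 | x_i)
    gamma   -- cost of a positive decision in units of repaid loans
    """
    return np.sum(p_x * pi * p_y1) - gamma * np.sum(p_x * pi)

def benefit(pi, i):
    """Individual benefit of Eq. 2 with f(d) = d: the probability of a loan."""
    return pi[i]

# Toy example with three feature values (numbers are illustrative only).
p_x = np.array([0.5, 0.3, 0.2])
p_y1 = np.array([0.9, 0.6, 0.2])
pi = np.array([1.0, 0.7, 0.0])
print(utility(pi, p_x, p_y1, gamma=0.5), benefit(pi, 1))
```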

3 Problem Formulation

As in most previous work on strategic classification bruckner2011stackelberg ; dalvi2004adversarial ; dong2018strategic ; hardt2016strategic ; hu2019disparate , we consider a Stackelberg game in which the decision maker moves first and shares her decision policy before individuals best-respond (in previous work, the predictive model, rather than the decision policy, is what the decision maker shares). Moreover, we assume every individual is rational and aims to maximize her individual benefit. Then, our goal is to find the optimal policy that maximizes the utility defined in Eq. 1, i.e.,

\pi^{*} = \operatorname*{argmax}_{\pi}\; u(\pi, \gamma) \qquad (3)

under the assumption that each individual best responds. For each individual, her best response is to change from her initial set of features x to a set of features

x^{*} = \operatorname*{argmax}_{x'}\; b(\pi, x') - c(x, x') \qquad (4)

where c(x, x') is the cost she pays for changing from x to x', with c(x, x) = 0 (note that, in contrast with previous work, we do not make any additional assumption regarding the properties of the cost). At a population level, this best response results in a transportation of mass between the original feature distribution P(x) and the induced distribution P(x | π). Thus, we can readily derive an analytical expression for the induced feature distribution in terms of the original feature distribution:

P(x \mid \pi) = \sum_{x'} P(x')\, \mathbb{I}\!\left[x = \operatorname*{argmax}_{x''}\; b(\pi, x'') - c(x', x'')\right] \qquad (5)

Note that the transportation of mass between the original and the induced feature distribution has a natural interpretation in terms of optimal transport theory villani2008optimal . More specifically, the induced feature distribution is given by P(x | π) = Σ_{x'} w(x', x), where w(x', x) denotes the flow of mass from x' to x, and it is the solution to the following optimal transport problem:

\max_{w \geq 0} \; \sum_{x', x} w(x', x)\left[b(\pi, x) - c(x', x)\right] \quad \text{s.t.} \quad \sum_{x} w(x', x) = P(x') \;\; \forall x' \qquad (6)

Finally, we can combine Eqs. 3-5 and rewrite our goal as follows:

\pi^{*} = \operatorname*{argmax}_{\pi}\; \sum_{x} P(x \mid \pi)\, \pi(d=1 \mid x)\left(P(y=1 \mid x) - \gamma\right) \qquad (7)

where note that, by definition, π(d = 1 | x) ∈ [0, 1] for all x and, in practice, the distribution P(x) and the conditional distribution P(y | x) may be approximated using models trained on historical data.
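As an illustration of Eqs. 4–7, the following sketch (again ours, with hypothetical variable names) computes the best response of each feature value, the induced distribution P(x | π), and the resulting utility for discrete features, assuming b(π, x) = π(d = 1 | x) and a given cost matrix with zero diagonal.

```python
import numpy as np

def induced_distribution(pi, p_x, cost):
    """P(x | pi) of Eq. 5: each unit of mass at x' moves to its best response
    argmax_x  pi(d=1 | x) - c(x', x); ties are broken by the first maximizer."""
    p_induced = np.zeros(len(p_x))
    for src in range(len(p_x)):
        best = np.argmax(pi - cost[src])  # best response of individuals at x_src (Eq. 4)
        p_induced[best] += p_x[src]
    return p_induced

def strategic_utility(pi, p_x, p_y1, cost, gamma):
    """Objective of Eq. 7: utility evaluated on the induced feature distribution."""
    p_ind = induced_distribution(pi, p_x, cost)
    return np.sum(p_ind * pi * (p_y1 - gamma))
```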

4 Optimal Decision Policies are Stochastic and Hard to Find

In this section, we first show that, in contrast with the non-strategic setting, optimal decision policies that maximize utility in a strategic setting are stochastic. Then, we demonstrate that we cannot expect to find these optimal decision policies in polynomial time.

Optimal policies in a strategic setting are stochastic. In a non-strategic setting, where individuals do not change their features and thus P(x | π) = P(x), it has been shown that, under perfect knowledge of the conditional distribution P(y | x), the optimal policy that maximizes utility is a simple deterministic threshold rule corbett2017algorithmic ; valera2018enhancing , i.e.,

\pi^{*}(d=1 \mid x) = \mathbb{I}\!\left[P(y=1 \mid x) \geq \gamma\right] \qquad (8)

This has lent support to focusing on deterministic threshold policies and has seemingly justified using predictions and decisions interchangeably. However, in a strategic setting, there are many instances in which this result does not hold. For example, consider a small instance with a handful of feature values, their distribution P(x), the conditional P(y = 1 | x), pairwise costs c(x, x'), and a constant γ.

In the non-strategic setting, the optimal policy is clearly the deterministic threshold rule of Eq. 8. However, in the strategic setting, a brute-force search reveals that the optimal policy assigns fractional decision probabilities to some feature values, and it induces a transportation of mass towards the feature values that receive the most favorable decisions. Here, note that the optimal policy in the strategic setting achieves a higher utility than its counterpart in the non-strategic setting. Unfortunately, we will now show that any algorithm that finds the optimal policy in a strategic setting, including brute force, will have exponential complexity unless P = NP.
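The following self-contained sketch (the instance and all numbers are ours, not the paper's) shows how such a claim can be checked by brute force: it grid-searches stochastic policies and compares the best one against the best deterministic policy under the strategic objective of Eq. 7; on this hypothetical instance, the best stochastic policy assigns a fractional decision probability to the middle feature value and strictly beats every deterministic policy.

```python
import itertools
import numpy as np

def strategic_utility(pi, p_x, p_y1, cost, gamma):
    # Induced distribution (Eq. 5): mass at x' moves to argmax_x pi[x] - c(x', x).
    p_ind = np.zeros(len(p_x))
    for src in range(len(p_x)):
        p_ind[np.argmax(pi - cost[src])] += p_x[src]
    return np.sum(p_ind * pi * (p_y1 - gamma))

# Hypothetical three-value instance (all numbers are ours, not the paper's).
p_x = np.array([0.3, 0.3, 0.4])
p_y1 = np.array([0.95, 0.6, 0.1])
cost = np.array([[0.0, 10.0, 10.0],   # from x0
                 [10.0, 0.0, 10.0],   # from x1
                 [0.7, 0.2, 0.0]])    # from x2: cheap to reach x1, pricey to reach x0
gamma = 0.5

grid = np.linspace(0.0, 1.0, 21)  # candidate values for pi(d=1 | x)
best_stoch = max(itertools.product(grid, repeat=3),
                 key=lambda pi: strategic_utility(np.array(pi), p_x, p_y1, cost, gamma))
best_det = max(itertools.product([0.0, 1.0], repeat=3),
               key=lambda pi: strategic_utility(np.array(pi), p_x, p_y1, cost, gamma))
print("best stochastic:", np.array(best_stoch),
      strategic_utility(np.array(best_stoch), p_x, p_y1, cost, gamma))
print("best deterministic:", np.array(best_det),
      strategic_utility(np.array(best_det), p_x, p_y1, cost, gamma))
```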

Hardness results. Our main result is the following theorem, which shows that we cannot expect to find the optimal policy that maximizes utility in polynomial time: The problem of finding the optimal decision policy that maximizes utility in a strategic setting is NP-hard.

Proof.

Without loss of generality, assume each individual has a single feature x, which can take n values. We start by representing the problem using a directed weighted bipartite graph whose nodes can be divided into two disjoint sets: one with a node per original feature value x' and one with a node per feature value x individuals may move to, so each set contains n nodes. We characterize each node x' in the first set with its probability mass P(x'), and each node x in the second set with its decision probability π(d = 1 | x) and its conditional P(y = 1 | x). Then, we connect each node x' in the first set to each node x in the second set and set the edge weight to the net benefit b(π, x) − c(x', x). Now, for each node x' in the first set, only the edge with maximum weight (the best response) will have nonzero utility, namely P(x') π(d = 1 | x) (P(y = 1 | x) − γ).

Under this representation, the problem reduces to finding the values of π(d = 1 | x) such that the sum of the utilities of all edges in the graph is maximized.

Next, we reduce the SAT problem karp1972reducibility , which is known to be NP-complete, to our problem. In a SAT instance, the goal is to find an assignment to a set of Boolean variables (and their logical complements) that satisfies a given set of OR clauses. More specifically, we start by introducing another directed weighted bipartite graph, whose nodes can also be divided into two disjoint sets. The first set contains nodes with labels

and the second set contains nodes with labels

For the first set, we characterize each node with , where

for all and . For the second set, we characterize each node with and , where

for all . Then, we connect each node in the first set to each node in the second set and set each edge weight to , where:


  • and for each and .

  • , and for each and .

  • , and for each and .

  • if the clause contains , if the clause contains , and for all and .

As before, for each node in the first set, only the edge with the maximum weight will have nonzero utility, i.e.,

Now, note that, under the above definition of utilities and costs, finding the optimal values of such that the sum of the utilities of all edges in the graph is maximized reduces to first solving independent problems, one per pair and , since whenever , the edge will never be active, and each optimal value will always be either zero or one. Moreover, the maximum utility due to the nodes will always be smaller than the utility due to and

and we can exclude them for the moment. In the following, we fix

and compute the sum of utilities for all possible values of and :


  • For , the maximum sum of utilities is whenever .

  • For , the sum of utilities is for any value of and .

  • For , the maximum sum of utilities is .

Therefore, the maximum sum of utilities occurs whenever for all . Finally, to find the actual values of and that maximize the overall utility, including the utility due to the nodes , we will need to solve the SAT problem with and . This concludes the proof. ∎

5 A Greedy Algorithm with Local Guarantees

In this section, we first introduce an efficient greedy algorithm to approximate the optimal decision policy that maximizes utility under the assumption that the individuals best respond, given by Eq. 7. Then, we prove that this greedy algorithm is guaranteed to terminate and find locally optimal decision policies. Finally, we propose a variation of the greedy algorithm that is amenable to parallelization (but does not enjoy theoretical guarantees).

A greedy algorithm. The greedy algorithm is based on the following key insight: fix the decision policy π(d = 1 | x) for all feature values except one value. Then, Eq. 7 becomes a linear program in the decision probability of that single value, which can be solved efficiently using well-known techniques.

1: Input: constant γ, distribution P(x), conditional distribution P(y | x), and cost c
2: π ← InitializePolicy()
3:
4:
5: for each feature value, in order of utility, do
6:     x ← next feature value in the ordering
7:     π(d = 1 | x) ← Solve(π, x)
8: end for
9: return π
Algorithm 1 GreedyPolicy: it approximates the optimal decision policy that maximizes utility under the assumption that the individuals best respond.

Exploiting this insight, the greedy algorithm proceeds iteratively over feature values and, at each iteration, it optimizes the decision policy for one feature value while fixing the decision policy for all other values. Algorithm 1 summarizes the greedy algorithm. Within the algorithm, InitializePolicy initializes the decision policy for all feature values, the ordering in line 5 ranks feature values in terms of their utility, and Solve finds the best policy π(d = 1 | x) for the current feature value x given the (fixed) policy for all other feature values. Note that we proceed over feature values according to their utility value because, in practice, we have observed that such an ordering improves performance. However, our theoretical results do not depend on such an ordering.
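The sketch below (ours, not the paper's released implementation) mimics the coordinate-wise structure of Algorithm 1; for simplicity, it replaces the exact linear program inside Solve with a line search over a grid of candidate values for the decision probability being updated, which preserves the monotone-improvement property on that grid.

```python
import numpy as np

def greedy_policy(p_x, p_y1, cost, gamma, grid_size=101, max_rounds=50):
    """Coordinate-wise greedy approximation of the optimal strategic policy.

    At each step, the decision probability of a single feature value is
    re-optimized (here by grid search instead of the paper's linear program)
    while all other coordinates are kept fixed.
    """
    n = len(p_x)
    grid = np.linspace(0.0, 1.0, grid_size)
    pi = np.zeros(n)  # simple initialization; the paper's InitializePolicy may differ

    def utility(pi):
        # Induced distribution (Eq. 5) followed by the objective of Eq. 7.
        p_ind = np.zeros(n)
        for src in range(n):
            p_ind[np.argmax(pi - cost[src])] += p_x[src]
        return np.sum(p_ind * pi * (p_y1 - gamma))

    # Process feature values in decreasing order of a simple utility proxy.
    order = np.argsort(-(p_y1 - gamma) * p_x)
    for _ in range(max_rounds):
        improved = False
        for i in order:
            candidates = pi[None, :].repeat(grid_size, axis=0)
            candidates[:, i] = grid
            values = np.array([utility(c) for c in candidates])
            best = values.argmax()
            if values[best] > utility(pi) + 1e-12:
                pi = candidates[best]
                improved = True
        if not improved:
            break
    return pi
```

On the hypothetical instance used in the brute-force sketch of Section 4, this routine recovers the same fractional policy.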

Figure 1: Optimal decision policies and induced feature distributions. Panels (a) and (b) visualize P(x) and P(y = 1 | x), respectively. Panels (c)–(h) visualize the policy π(d = 1 | x) and the induced distribution P(x | π) for different values of the parameter that controls the cost to change feature values. In all panels, each cell corresponds to a different feature value and darker colors correspond to higher values.

Theoretical guarantees of the greedy algorithm. We start our theoretical analysis by showing that, at each iteration, Algorithm 1 is guaranteed to find a better policy: At each iteration, Algorithm 1 finds a policy whose utility is at least as high as that of the policy from the previous iteration.

Proof.

It readily follows from the fact that the linear program always returns a better policy and by definition, at the end of each iteration, . ∎

Moreover, the following proposition shows that Algorithm 1 is guaranteed to terminate: Algorithm 1 terminates after at most a finite number of iterations, bounded in terms of the common denominator of all elements in the relevant set of quantities (the common denominator is such that every element of the set is an integer multiple of its reciprocal; such a common denominator exists if and only if every element of the set is rational).

Proof.

We prove that the common denominator remains a denominator of each decision probability after each update of the greedy algorithm. We prove this claim by induction. The induction basis is obvious from the initialization of the policy values. For the induction step, suppose that we are going to update one decision probability in the greedy algorithm. According to the induction hypothesis, the claim holds for all current values. Then, it can be shown that the new value will be chosen among the elements of the following set (these are the thresholds that might change the transportation of mass):

(9)

In the above, it is clear that all these possible values are divisible by the common denominator, so the new value will be divisible by it too. Then, there are finitely many possible values for each decision probability and, as a result, finitely many different decision policies. Finally, since the total utility increases after each iteration, as shown in Proposition 1, the decision policy at each iteration must be different. Therefore, the algorithm terminates after a finite number of iterations. ∎

Finally, as a direct consequence of both propositions, we can conclude that Algorithm 1 finds locally optimal decision policies.

A parallel greedy algorithm. Whenever the number of feature values is large, the greedy algorithm may still suffer from scalability problems. In those cases, we can substitute the sequential loop in line 5 of Algorithm 1 with a parallel loop, to be able to solve the linear programs, one per feature value, in parallel. While the resulting algorithm does not enjoy theoretical guarantees, it achieves comparable performance in terms of utility, as shown in Figure 2.
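A minimal sketch of this variant (our reading of it, using threads as in the experiments and the same grid-search stand-in for the linear program) re-optimizes every coordinate against the current policy in parallel and then applies all updates at once; as noted above, such synchronous updates come with no convergence guarantee.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def parallel_round(pi, p_x, p_y1, cost, gamma, grid):
    """One synchronous round of the parallel variant: every coordinate is
    re-optimized against the *current* policy in parallel, and all updates
    are applied together before the next round."""
    n = len(pi)

    def utility(cand):
        p_ind = np.zeros(n)
        for src in range(n):
            p_ind[np.argmax(cand - cost[src])] += p_x[src]
        return np.sum(p_ind * cand * (p_y1 - gamma))

    def best_value(i):
        candidates = [np.concatenate([pi[:i], [v], pi[i + 1:]]) for v in grid]
        return grid[int(np.argmax([utility(c) for c in candidates]))]

    with ThreadPoolExecutor() as pool:
        return np.array(list(pool.map(best_value, range(n))))
```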

6 Experiments on Synthetic Data

Figure 2: Performance and running time using synthetic data. Panels (a) and (b) show the utility achieved by four different decision policies against the cost parameter and against the number of feature values, respectively. Panel (c) shows the running time of the greedy algorithm, the parallel greedy algorithm, and brute-force search against the number of feature values. In Panel (a), the number of feature values is fixed and, in Panels (b) and (c), the cost parameter is fixed.

Structure of decision policies and induced distributions. In this section, we look at a particular configuration; however, we found qualitatively similar results across many different configurations. More specifically, we consider two-dimensional features and compute P(x), shown in Figure 1(a), by discretizing a two-dimensional Gaussian mixture model into a grid and, for simplicity, rescaling the two feature dimensions. Then, we compute P(y = 1 | x), shown in Figure 1(b), using a fixed expression of the feature values. Finally, we set the cost between feature values in terms of the distance between them, scaled by a given parameter (the larger the value of this parameter, the easier it becomes for an individual to change features).
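A sketch of this configuration could look as follows; the mixture parameters, grid size, expression for P(y = 1 | x), and the exact form of the cost are placeholders of ours, since the paper's values are not reproduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical settings; the paper's exact values differ.
grid_side = 20   # grid resolution per dimension
alpha = 0.5      # cost parameter: the larger alpha, the cheaper feature changes
means = [np.array([0.3, 0.3]), np.array([0.7, 0.7])]
covs = [0.02 * np.eye(2), 0.02 * np.eye(2)]
weights = [0.5, 0.5]

# Discretize the two-dimensional Gaussian mixture into a grid to obtain P(x).
xs = np.linspace(0.0, 1.0, grid_side)
cells = np.array([[gx, gy] for gx in xs for gy in xs])
p_x = sum(w * multivariate_normal(m, c).pdf(cells)
          for w, m, c in zip(weights, means, covs))
p_x /= p_x.sum()

# P(y = 1 | x): any fixed map from the grid to [0, 1] serves for illustration.
p_y1 = cells[:, 0]

# Cost of moving between grid cells: proportional to distance, decreasing in alpha.
dists = np.linalg.norm(cells[:, None, :] - cells[None, :, :], axis=-1)
cost = dists / alpha
```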

Given the above configuration, we obtain a locally optimal decision policy using Algorithm 1 and, given this policy, compute its induced distribution using Eq. 5. Panels (c)–(h) of Figure 1 summarize the results for several values of the cost parameter, which show that, as the cost of moving to further feature values decreases, the decision policy only provides positive decisions for a few feature values with high P(y = 1 | x), encouraging individuals to move to those values. Appendix A.1 contains another example showing qualitatively similar behavior.

Performance evaluation. We compare the utility achieved by the decision policy found by the greedy algorithm against the utility achieved by: (i) the decision policy found by the parallel greedy algorithm; (ii) the optimal decision policy in a non-strategic setting; and (iii) a deterministic decision policy obtained by thresholding the decision policy found by our algorithm. We run the parallel greedy algorithm using multiple parallel threads. Here, for simplicity, we consider unidimensional features with discrete values and compute P(y = 1 | x) by sampling from a Gaussian distribution truncated from below at zero. Then, we sample P(x), sample the cost between feature values for a fraction of all pairs of values, fix the cost for the remaining pairs, and set the value of γ.
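The following sketch generates a synthetic instance of this kind; the distribution of P(x), the truncation via clipping, the infinite cost for unsampled pairs, and all concrete numbers are our own placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50           # number of feature values (hypothetical)
sparsity = 0.3   # fraction of feature-value pairs with a finite, sampled cost

# P(y = 1 | x): Gaussian samples, clipped to [0, 1] as a simple stand-in for
# truncation from below at zero (parameters are illustrative).
p_y1 = np.clip(rng.normal(loc=0.5, scale=0.3, size=n), 0.0, 1.0)

# P(x): a random distribution over the n feature values.
p_x = rng.dirichlet(np.ones(n))

# Costs: sampled for a random fraction of pairs; the remaining pairs get an
# effectively infinite cost (our assumption), and staying put is always free.
cost = np.full((n, n), np.inf)
mask = rng.random((n, n)) < sparsity
cost[mask] = rng.random(mask.sum())
np.fill_diagonal(cost, 0.0)

gamma = 0.5  # illustrative value
```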

Figures 2(a,b) summarize the results for several sparsity levels and numbers of feature values, where we repeat each experiment several times to obtain error bars. We find that, in comparison with the decision policies designed for strategic settings, the optimal decision policy in a non-strategic setting achieves an underwhelming performance.

Running time. We compare the running time of the greedy algorithm, the parallel greedy algorithm, and brute-force search (we ran all experiments on a machine equipped with 48 Intel(R) Xeon(R) 3.00GHz CPU cores and 1.2TB memory). We consider the same configuration as in the performance evaluation and an increasing number of feature values. Figure 2(c) summarizes the results, which show that: (i) brute-force search quickly becomes computationally intractable; and (ii) the greedy algorithm is faster than the parallel greedy algorithm for a small number of feature values, due to its lower number of iterations until termination; however, the parallel greedy algorithm becomes more scalable for a large number of feature values. Appendix A.2 shows the number of iterations the greedy algorithm and the parallel greedy algorithm take to terminate.

7 Experiments on Real Data

Figure 3: Results on LendingClub data. Panel (a) shows the utility achieved by four different decision policies against the cost parameter. Panel (b) shows the transportation of mass in the feature distribution induced by the decision policy found by the greedy algorithm, against the cost parameter. Panel (c) shows the feature distribution P(x) and the distribution induced by the decision policy found by the greedy algorithm for three different values of the cost parameter.

Experimental setup. We use a publicly available lending dataset (https://www.kaggle.com/wordsforthewise/lending-club/version/3), which comprises information about all accepted and rejected loan applications in LendingClub from 2007 to 2018. For each application, the dataset contains various demographic features about the applicant. In addition, for each accepted application, it contains the current loan status (e.g., Current, Late, Fully Paid), the latest payment information, and the FICO scores.

In our experiment, we first use the information about accepted applications to train a decision tree classifier (DTC) with a fixed number of leaves. This classifier predicts whether an applicant fully pays a loan (y = 1) or defaults/has a charged-off debt (y = 0) on the basis of a set of raw features, i.e., the loan amount, employment length, state, debt to income ratio, zip code, and credit score (Appendix B provides more information about the raw features). We estimate the accuracy of the classifier using cross-validation. Then, for each (accepted or rejected) application, we set its unidimensional feature x to be the leaf of the DTC to which the application is mapped, given the raw features, and we approximate the conditional probability P(y = 1 | x) using the prediction of the DTC for the corresponding leaf. Then, we compute the cost between feature values by comparing the raw features of the applications mapped to each leaf of the DTC, where we apply a scaling factor similarly as in the experiments on synthetic data (Appendix B.1 provides more information on the computation of the cost). Finally, we set the constant γ, the cost of giving a loan, to a percentile of the corresponding values for applicants who default or have a charged-off debt.
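A sketch of this pipeline is shown below; the file name, column names, preprocessing, and number of leaves are hypothetical (the raw LendingClub columns need numeric encoding and differ across dataset versions), and only the overall leaf-as-feature construction follows the text.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

N_LEAVES = 64  # hypothetical; the paper fixes its own number of leaves

# Load accepted applications; file and column names are placeholders and the
# raw columns are assumed to be already numerically encoded.
accepted = pd.read_csv("accepted_2007_to_2018.csv", low_memory=False)
raw_features = ["loan_amnt", "emp_length_years", "state_code",
                "dti", "zip_prefix", "fico_avg"]
accepted = accepted.dropna(subset=raw_features + ["loan_status"])
X = accepted[raw_features].to_numpy()
y = (accepted["loan_status"] == "Fully Paid").astype(int).to_numpy()

# Train a decision tree classifier with a bounded number of leaves.
dtc = DecisionTreeClassifier(max_leaf_nodes=N_LEAVES, random_state=0)
dtc.fit(X, y)

# The unidimensional feature of an application is the DTC leaf it is mapped to,
# and P(y = 1 | leaf) is approximated by the leaf's fraction of repaid loans.
leaf_of = dtc.apply(X)
p_y1_per_leaf = {leaf: y[leaf_of == leaf].mean() for leaf in set(leaf_of)}
```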

Results. Figure 3 summarizes the results. As the cost parameter increases and the cost of moving to further feature values decreases, we find that: (i) the decision policies found by the greedy and parallel greedy algorithms outperform the deterministic policy derived from the decision policy found by the greedy algorithm and the optimal policy in a non-strategic setting by large margins (Panel (a)); (ii) there is a higher transportation of mass between the original feature distribution and the distribution induced by the decision policy found by the greedy algorithm (Panel (b)); and (iii) the probability mass in the induced distribution becomes more concentrated (Panel (c)).

8 Conclusions

In this paper, we have studied the problem of finding optimal decision policies that maximize utility in a strategic setting. We have shown that, in contrast with the non-strategic setting, optimal decision policies that maximize utility are stochastic and hard to find. Moreover, we have proposed an efficient greedy algorithm that is guaranteed to find locally optimal decision policies in polynomial time and demonstrated its efficacy using both synthetic and real data.

Our work opens up many interesting avenues for future work. Our greedy algorithm enjoys local performance guarantees; it would be interesting to develop algorithms with global performance guarantees. Moreover, we have assumed that features take discrete values; it would be very interesting to extend our work to real-valued features. Our problem formulation considers policies that maximize utility; a natural next step would be to consider utility maximization under fairness constraints hardt2016equality ; zafar2017fairness . In our work, individuals have white-box access to the decision policy; however, in practice, they may only have access to explanations of specific outcomes. Finally, there are reasons to believe that causal features should be more robust to strategic behavior. It would be interesting to investigate the use of causally aware feature selection methods rojas2018invariant in strategic settings.

References

  • [1] M. Brückner, C. Kanzow, and T. Scheffer. Static prediction games for adversarial learning problems. Journal of Machine Learning Research, 13(Sep):2617–2654, 2012.
  • [2] M. Brückner and T. Scheffer. Stackelberg games for adversarial prediction problems. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, 2011.
  • [3] S. Coate and G. C. Loury. Will affirmative-action policies eliminate negative stereotypes? The American Economic Review, pages 1220–1240, 1993.
  • [4] S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic decision making and the cost of fairness. KDD, 2017.
  • [5] N. Dalvi, P. Domingos, S. Sanghai, D. Verma, et al. Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 99–108. ACM, 2004.
  • [6] J. Dong, A. Roth, Z. Schutzman, B. Waggoner, and Z. S. Wu. Strategic classification from revealed preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 55–70. ACM, 2018.
  • [7] R. G. Fryer Jr and G. C. Loury. Valuing diversity. Journal of Political Economy, 121(4):747–774, 2013.
  • [8] M. Hardt, N. Megiddo, C. Papadimitriou, and M. Wootters. Strategic classification. In Proceedings of the 2016 ACM conference on innovations in theoretical computer science, 2016.
  • [9] M. Hardt, E. Price, N. Srebro, et al. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pages 3315–3323, 2016.
  • [10] L. Hu and Y. Chen. A short-term intervention for long-term fairness in the labor market. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1389–1398. International World Wide Web Conferences Steering Committee, 2018.
  • [11] L. Hu, N. Immorlica, and J. W. Vaughan. The disparate effects of strategic manipulation. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019.
  • [12] R. M. Karp. Reducibility among combinatorial problems. In Complexity of computer computations, pages 85–103. Springer, 1972.
  • [13] N. Kilbertus, M. Gomez-Rodriguez, B. Schölkopf, K. Muandet, and I. Valera. Improving consequential decision making under imperfect predictions. arXiv preprint arXiv:1902.02979, 2019.
  • [14] J. Kleinberg and M. Raghavan. How do classifiers induce agents to invest effort strategically? arXiv preprint arXiv:1807.05307, 2018.
  • [15] L. T. Liu, S. Dean, E. Rolf, M. Simchowitz, and M. Hardt. Delayed impact of fair machine learning. In Advances in Neural Information Processing Systems, 2018.
  • [16] S. Milli, J. Miller, A. D. Dragan, and M. Hardt. The social cost of strategic classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019.
  • [17] H. Mouzannar, M. I. Ohannessian, and N. Srebro. From fair decision making to social equality. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 359–368. ACM, 2019.
  • [18] M. Rojas-Carulla, B. Schölkopf, R. Turner, and J. Peters. Invariant models for causal transfer learning. The Journal of Machine Learning Research, 19(1):1309–1342, 2018.
  • [19] T. Schnabel, P. N. Bennett, S. T. Dumais, and T. Joachims. Short-term satisfaction and long-term coverage: Understanding how users tolerate algorithmic exploration. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 513–521. ACM, 2018.
  • [20] A. Sinha, D. F. Gleich, and K. Ramani. Deconvolving feedback loops in recommender systems. In Advances in Neural Information Processing Systems, pages 3243–3251, 2016.
  • [21] B. Tabibian, V. Gomez, A. De, B. Schoelkopf, and M. Gomez-Rodriguez. Consequential ranking algorithms and long-term welfare. arXiv preprint arXiv:1905.05305, 2019.
  • [22] I. Valera, A. Singla, and M. Gomez-Rodriguez. Enhancing the accuracy and fairness of human decision making. In Advances in Neural Information Processing Systems, 2018.
  • [23] C. Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
  • [24] M. B. Zafar, I. Valera, M. Gomez Rodriguez, and K. P. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, pages 1171–1180, 2017.

Appendix A Additional experiments on synthetic data

A.1 Structure of decision policies and induced distributions

We consider two-dimensional features, define the cost between feature values in terms of the distance between them, scaled by a given parameter (the larger the value of this parameter, the easier it becomes for an individual to change features), and set the remaining quantities as in the main paper. Moreover, to compute P(x), shown in Figure 4(a), we discretize a two-dimensional Gaussian distribution into a grid and, for simplicity, rescale the two feature dimensions. To compute P(y = 1 | x), shown in Figure 4(b), we use the same type of expression as in the main paper. Here, note that, in contrast with the example in Figure 1 in the main paper, P(x) is unimodal and P(y = 1 | x) is bimodal.

Given the above configuration, we obtain a locally optimal decision policy using Algorithm 1 and, given this policy, compute its induced distribution using Eq. 5. Panels (c)–(h) of Figure 4 summarize the results for several values of the cost parameter, which are in qualitative agreement with the example shown in Figure 1 in the main paper—as the cost of moving to further feature values decreases, the decision policy only provides positive decisions for a few values with high P(y = 1 | x), encouraging individuals to move to those values.

Figure 4: Optimal decision policies and induced feature distributions. Panels (a) and (b) visualize P(x) and P(y = 1 | x), respectively. Panels (c)–(h) visualize the policy π(d = 1 | x) and the induced distribution P(x | π) for different values of the parameter that controls the cost to change feature values. In all panels, each cell corresponds to a different feature value and darker colors correspond to higher values.

A.2 Number of iterations to termination

Figure 5 shows the number of iterations the greedy algorithm and the parallel greedy algorithm take to terminate against the number of feature values, where we consider the same configuration as in the running time evaluation. Here, note that the parallel greedy algorithm does not enjoy the same theoretical guarantees as the greedy algorithm and thus it may not always converge. In our experiments, we set a limit on the number of iterations for the parallel greedy algorithm and, in fact, we do reach that limit sometimes. Finally, although the parallel greedy algorithm takes more iterations to terminate, thanks to parallelization, it is more scalable for a large number of feature values.

Figure 5: Number of iterations the greedy algorithm and the parallel greedy algorithm take to terminate against the number of feature values.

Appendix B Additional details on the experiments on real data

B.1 Computation of the cost between feature values

To compute the cost between feature values, we compare the raw features (refer to Appendix B.2) of the applications mapped to each leaf of the decision tree classifier (DTC). More specifically, for each leaf of the DTC, we consider the concatenation of the state code and partial zip code data (which we call state-zip), the debt to income ratio, and the employment length of each applicant mapped into the leaf. Then, given a pair of leaves, we compute the cost as the sum of the following three terms (a code sketch of this computation follows the list below):


  • The ratio of unique state-zip values for applicants mapped into one leaf that no applicant mapped into the other leaf has. This term receives its own weight.

  • The difference between the average debt to income ratios of applicants mapped into the two leaves, where we set all negative differences to zero, normalize the resulting differences by a high percentile of the debt to income values, and cap the values at a maximum. This follows the intuition that it is more costly to move from a leaf with a higher average debt to income ratio to a leaf with a lower one. This term receives its own weight.

  • The difference between the average employment lengths of applicants mapped into the two leaves, where we set all negative differences to zero and normalize the resulting differences to lie between zero and one. This follows the intuition that it is more costly to move from a leaf with a lower average employment length to a leaf with a higher one. This term receives its own weight.
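A sketch of this computation is given below; the weights, the normalization percentile, the cap, the employment-length normalization, and the direction of the state-zip comparison are placeholders/assumptions of ours.

```python
import numpy as np

def leaf_cost(leaf_a, leaf_b, w1=1.0, w2=1.0, w3=1.0, dti_percentile=95, cap=1.0):
    """Cost of moving from leaf_a to leaf_b as the weighted sum of three terms.

    leaf_a, leaf_b -- dicts with keys 'state_zip' (set of strings),
                      'dti' (array of debt-to-income ratios), and
                      'emp_len' (array of employment lengths in years).
    """
    # 1) Fraction of state-zip values in leaf_b that no applicant in leaf_a has.
    new_zip = len(leaf_b["state_zip"] - leaf_a["state_zip"]) / max(len(leaf_b["state_zip"]), 1)

    # 2) Positive part of the drop in average debt-to-income ratio, normalized and capped.
    dti_scale = np.percentile(np.concatenate([leaf_a["dti"], leaf_b["dti"]]), dti_percentile)
    dti_term = min(max(leaf_a["dti"].mean() - leaf_b["dti"].mean(), 0.0) / max(dti_scale, 1e-9), cap)

    # 3) Positive part of the increase in average employment length, scaled to [0, 1]
    #    using a rough 40-year upper bound (a placeholder normalization).
    emp_gap = max(leaf_b["emp_len"].mean() - leaf_a["emp_len"].mean(), 0.0)
    emp_term = min(emp_gap / 40.0, 1.0)

    return w1 * new_zip + w2 * dti_term + w3 * emp_term
```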

B.2 Raw features

The decision tree classifier (DTC) predicts whether an applicant fully pays a loan (y = 1) or defaults/has a charged-off debt (y = 0) based on the following raw features:


  • Loan Amount: The amount that the applicant initially requested.

  • Employment Length: How long the applicant has been employed.

  • State: The U.S. state where the applicant lives.

  • Debt to Income Ratio: The ratio between the applicant’s financial debts and her average income.

  • Zip Code: The zip code of the applicant’s residential address.

  • FICO Score: The applicant’s FICO score, which is a credit score based on consumer credit files. FICO scores range from 300 to 850; for this study, we use the average of the low and high ends of each applicant’s reported FICO range.