Uncovering the Source of Machine Bias

01/09/2022
by   Xiyang Hu, et al.
Carnegie Mellon University

We develop a structural econometric model to capture the decision dynamics of human evaluators on an online micro-lending platform, and estimate the model parameters using a real-world dataset. We find that two types of gender bias, preference-based bias and belief-based bias, are present in human evaluators' decisions. Both types of bias are in favor of female applicants. Through counterfactual simulations, we quantify the effect of gender bias on loan granting outcomes and on the welfare of the company and the borrowers. Our results imply that the existence of either the preference-based bias or the belief-based bias reduces the company's profits: when either bias is removed, the company's profits increase. Both increases result from raising the approval probability for borrowers, especially male borrowers, who eventually pay back their loans. For borrowers, the elimination of either bias decreases the gender gap in the true positive rate of the credit risk evaluation. We also train machine learning algorithms on both the real-world data and the data from the counterfactual simulations. We compare the decisions made by those algorithms to see how evaluators' biases are inherited by the algorithms and reflected in machine-based decisions. We find that machine learning algorithms can mitigate both the preference-based bias and the belief-based bias.


1. Introduction

Humans and organizations often need to make decisions under imperfect information, and they generally rely on certain statistics that quantify the likelihood of different outcomes (Corbett-Davies and Goel, 2018). In practice, however, these statistics generally cannot perfectly predict the outcome of the event and often involve human inputs, which may contain bias against certain demographics (often referred to as protected groups), such as minorities or females. As a result, members of these demographic groups are often treated unfairly, which leads to various social and economic problems.

A typical context of decision making under imperfect information is the loan approval decision in micro-lending. The emerging micro-lending business provides faster and more convenient access to financial resources for more people through a streamlined loan application process (Mateescu, 2015). The application process does not require visits to a physical institution, and the credit assessment process involves human decisions (Lu et al., 2012). That is, human evaluators (i.e., platform staff or lenders) assess credit risk using individual information (e.g., demographics) collected from loan applicants. Based on the evaluated credit risk, evaluators make the final loan approval decisions. However, this practice may contain bias due to the limited human cognitive capacity to process the complex computation required for credit risk evaluation (Icard, 2018). Theoretically, bias in credit risk evaluation can substantially reduce evaluation accuracy and the profitability of micro-lending platforms or lenders, and cause social unfairness (Fu et al., 2019). Unfortunately, the influence of human bias is largely neglected in the literature. Since human evaluations are inevitable in many contexts such as micro-lending, the first goal of this study is to investigate: Does bias exist in human evaluators' loan evaluations? If so, how does this bias affect their decisions and the market outcome?

Over the past decades, economists and social scientists have proposed different models to explain and quantify human bias (Bohren et al., 2019; Arnold et al., 2018; Gneezy et al., 2012; Parsons et al., 2011; Bertrand et al., 2005; Biernat and Manis, 1994). They classify human bias into two broad categories: preference-based bias (or taste-based bias) and belief-based bias (Bohren et al., 2019). Preference-based bias arises when evaluators have animus towards a particular group, while belief-based bias arises when evaluators' subjective beliefs about a group lead them to treat individuals from that group less favorably than members of other groups with the same observed performance. The classification of the two types of human bias is critical. It allows us to dynamically trace the evolution of bias over long-term (i.e., repeated) decisions rather than regarding bias as static; a static assumption may obscure the potential and value of learning behaviors in decision making (Zhang and Angela, 2013). It also enables us to unravel the influence of different types of human bias on decision outcomes.

Therefore, following this classification of the sources of decision bias, in this study we disentangle and quantify these two types of human bias in observable data from a micro-lending platform. Specifically, we develop a structural econometric model of human evaluators' loan approval decisions, which captures the underlying economic processes that drive their decisions. We then estimate the structural model using real-world data from a micro-lending platform that contains a large sample of repeated loan applications and approval decisions by the platform staff (i.e., human evaluators). Some of the parameters in the model capture the two possible types of human bias, and therefore their estimates reveal whether human evaluators' decisions exhibit those two types of bias. We compare a number of alternative model specifications and identify the one that best explains the observed human decisions in the data. We also conduct a survey of human evaluators to confirm our chosen model specification. Using the estimated structural model, we conduct policy simulations to quantify the effects of these human biases and their welfare implications.

Specifically, to measure the extent of bias, we examine the gender gap in the approval true positive rate (TPR). We adopt the concept of equal opportunity, one of the most popular fairness notions. Equal opportunity requires that qualified individuals, no matter what their sensitive attributes are, have an equal opportunity to receive favorable outcomes (Hardt et al., 2016). In our loan application setting, this means the two gender groups should have the same true positive rate if no gender bias exists.
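In formal terms (using our own notation, not symbols from the original paper: $\hat{Y}$ denotes the approval decision, $Y$ eventual repayment, and $G$ gender), equal opportunity and the gap we report can be written as

\[
\mathrm{TPR}_g = \Pr\big(\hat{Y} = 1 \mid Y = 1,\, G = g\big), \qquad
\Delta_{\mathrm{TPR}} = \mathrm{TPR}_{\mathrm{female}} - \mathrm{TPR}_{\mathrm{male}},
\]

and equal opportunity holds when $\Delta_{\mathrm{TPR}} = 0$.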

Furthermore, with the recent boom of AI technologies, many organizations have increasingly adopted machine learning algorithms to replace humans in decision making for better efficiency and decision quality (the micro-lending platform we work with is also considering using a machine learning algorithm to evaluate loan applications). The original thought was that algorithms are “objective” and therefore have the potential to eliminate discrimination against members of certain disadvantaged groups. However, numerous studies have cautioned that machine learning algorithms may also exhibit non-negligible bias against those groups.

One potential source of algorithmic bias lies in objective, inherent characteristics of the data themselves. This type of cause has been well studied (Lu et al., 2019; Fuster et al., 2020; Kleinberg et al., 2018a; Skeem and Lowenkamp, 2016). For example, the correlations between the same features and the outcome may differ across groups, and/or the data records of the disadvantaged group may be inferior in both quantity and quality to those of the regular group (the group that is not historically discriminated against). Another cause of algorithmic bias is biased or flawed training data generated by humans. That is, humans generate bias in the training data in the first place, which then translates into machine-learning-assisted decision making (Fu et al., 2021; Corbett-Davies and Goel, 2018; Tolan et al., 2019; Hardt et al., 2016).

Therefore, in the second part of this paper, we intend to address the following question: To what extent does a machine learning algorithm trained on historical data inherit human bias? To this end, we train various machines based on (a) data from human evaluators' decisions (the approved loans observed in the data) and (b) data from our counterfactual simulations with human biases removed. In the counterfactual simulations, we take advantage of an experiment the platform conducted: during a period of time, all applicants were approved without any selection. This novel experiment allows us to observe the credit behaviors of all applicants on the micro-lending platform, including the outcomes of loans that human evaluators would not have approved, which usually go unobserved and are impossible to evaluate. By leveraging these “de-biased” data sets, we are able to compare machines trained on data with and without human bias and demonstrate how different types of human bias, as well as other data characteristics, affect decisions made by machines.

We find that the evaluators exhibit both types of biases discussed above, with both the preference-based bias and the belief-based bias in favor of females. This is likely because females generally repay loans better than males, and research has shown that females are more risk averse and ethically sensitive (Cumming et al., 2015). Although the preference-based bias persists, the belief-based bias gradually shrinks as human evaluators learn from the repayment behaviors of each specific borrower. Our policy simulations suggest that eliminating either the preference-based bias or the belief-based bias from human evaluators' decisions can increase the platform's profits. The mechanism is the same in both scenarios: the extra profits result from raising the approval probability of borrowers, especially male borrowers, who eventually repay their loans. On the borrower end, eliminating either type of bias can mitigate the gender gap in credit evaluation as measured by the true positive rate, and when both types of bias are removed at the same time, the gender gap is minimized. When we further feed the real-world data and the counterfactual data into machine learning models, we find that eliminating upstream human decision biases helps combat algorithmic biases.

The theoretical contribution of this study is multifold. First, while a great body of the machine bias literature has focused on bias generated in the algorithmic process (Suresh and Guttag, 2020), our study is among the first to investigate the influence of human bias on AI-assisted decisions. Second, we introduce two types of human bias into a real-world decision-making context, and we are among the first to empirically identify and quantify these two human biases with a large-scale dataset and a structural econometric model. Furthermore, we add to the micro-lending and FinTech literature by revealing how human bias can influence platforms' profitability and service equality, and we characterize how human evaluators make credit risk evaluations and loan approval decisions in practice. Methodologically, we design a novel and comprehensive empirical framework to uncover the behavioral sources of bias, which are usually implicit and difficult to identify accurately. The framework works on secondary datasets and enables counterfactual simulations and AI-based analyses.

2. Relevant Literature

Our work is closely related to the literature on bias and discrimination in human decision making, especially bias in financial evaluations. Two categories of human decision bias have been extensively studied: one is ethnic or racial discrimination, and the other is gender discrimination. It has been documented that in the financial loan market, both non-FinTech and FinTech lenders tend to discriminate against ethnic-minority borrowers (higher interest rates and lower probability of being funded) through liability documents and facial bias (Sydnor and Pope, 2011; Bartlett et al., 2019). It has also been shown that women are more credit constrained than men by microfinance institutions (Blanchflower et al., 2003). In P2P lending, female borrowers need to pay higher interest rates (Alesina et al., 2013; Chen et al., 2017, 2020), and female founders are less successful in attracting male investors than observably similar male founders (Ewens and Townsend, 2020).

In addition to racial bias and gender bias, other kinds of bias or discrimination commonly seen in society include immigration- and age-related bias (Dobbie et al., 2018), occupation-related bias (Cui, 2019), home bias (Lin and Viswanathan, 2016), etc. The major reasons behind these human decision biases lie in the minority applicants' relative quality compared to the majority (Ferguson and Peters, 1995), the decision makers' improper task objectives and incentives (Dobbie et al., 2018), and the inherent formation and evolution processes of preference-based and belief-based evaluation biases (Gneezy et al., 2012; Bohren et al., 2019).

Economic theories classify inherent human bias into two types: preference-based bias and belief-based bias (Bohren et al., 2019). Preference-based bias arises when evaluators have animus towards a particular group, while belief-based bias arises when evaluators' subjective beliefs about a group lead them to treat individuals from that group less favorably than members of the regular group with the same observed performance. Belief-based bias can be further classified into two subcategories: belief-based bias with misspecified/incorrect beliefs and belief-based bias with correct beliefs (sometimes referred to as statistical bias). The former occurs when the evaluators' subjective beliefs about the group-level statistics of the protected group differ from reality, and the latter occurs when the subjective beliefs match reality. In a static setting, preference-based and belief-based biases are confounded, and it is hard to disentangle their effects on human decisions. In a dynamic setting, however, we are able to distinguish between the two biases, because preference-based bias persists across periods, while belief-based bias can be mitigated or even reversed as the evaluator observes new signals about each individual being evaluated. In this paper, we follow these definitions of the two types of bias. Our context of multi-period microloan borrowing and lending provides a great setting to identify them.

Our paper also builds on the growing literature on algorithmic discrimination and machine learning bias. One important source of machine bias is the human decision bias encoded in the training dataset. Fuster et al. (2020) incorporate the predictions of machine learning models into a simple equilibrium model of credit and find that algorithms increase rate disparity among and within different racial groups. Stevenson and Doleac (2019) conduct simulations to evaluate race and age disparities in financial risk assessment and demonstrate that human-machine interaction can lead to bias in both race and age. Lambrecht and Tucker (2019) find that economic forces in the market can distort neutral algorithms into discriminating against females in their exposure to advertisements for STEM (science, technology, engineering, and mathematics) jobs. Major reasons for algorithmic bias include lack of necessary data control (statistical bias) and unintended correlation with sensitive factors (Fu et al., 2019; Bartlett et al., 2019), training-sample bias (Cowgill et al., 2020), market mechanisms (Lambrecht and Tucker, 2019), etc. Even though algorithms can lead to various bias issues, implementing appropriate designs and regulations can make them positive forces for equity (Kleinberg et al., 2018b; Chouldechova, 2017; Rudin, 2019).

To deal with bias problems in human and machine decision making, researchers have come up with diverse methods. The most direct way is to obtain and include more useful data. Kleinberg et al. (2018a) show that the direct inclusion of a protected variable (e.g., race) is useful for mitigating unfairness. Lu et al. (2019) find that using proper alternative data can improve both financial profitability and equality. Aside from enriching the data, learning from repeated events can also mitigate bias. Cai et al. (2016) conclude, based on signaling theory, that evaluators/investors leverage information from repeated borrowing by the same borrower in her lending history. Kim (2020) argues that borrowers' past track record within the platform has a more important impact than other demographic factors on predicting the repayment performance of their current loans.

Another flourishing direction for combating decision bias is to develop more transparent, refined, and de-biased algorithms. It has been widely shown that enhanced algorithmic transparency and interpretability can help eliminate bias, and sometimes simple, transparent models are able to outperform complex black-box models (Rudin et al., 2020; Rudin and Shaposhnik, 2019; Rudin, 2019; Hu et al., 2019). Wang et al. (2013) demonstrate that a Bayesian investment model can significantly improve investors' decisions relative to other investment models. Choudhury et al. (2020) argue that human capital and machine learning can complement each other by combining algorithms with domain expertise or knowledge. Many other de-biasing methods draw on statistical, optimization, and behavioral perspectives (Berk et al., 2017; Lum and Johndrow, 2016; Hardt et al., 2016; Kamiran et al., 2010; Fu et al., 2021).

In terms of decision quality, human predictions tend to be less accurate, which can negatively affect the quality of human decisions. On the one hand, people have limited cognitive resources for the complex computation involved in evaluation (Icard, 2018); on the other hand, people may use a simple updating rule that, for example, linearly combines their personal experience and accumulated knowledge in repeated tasks (Jadbabaie et al., 2012).

Compared with human decision making, algorithm-based decision making has demonstrated a superior ability to achieve better accuracy and handle more complex information. Human versus machine decision making has been widely studied in the healthcare area. In most cases, machine learning models outperformed or tied the judgment accuracy of an average clinician (Camerer, 2019), and only a small fraction of clinicians were more accurate than machine learning models (e.g., Goldberg, 1970). Mechanical prediction techniques were about 10% more accurate than clinical predictions (Grove et al., 2000), and even a very simple actuarial method (i.e., a linear combination of criterion variables) has been shown to consistently outperform clinical judgment (Dawes, 1971).

However, there are also scenarios in which human experts can outperform machines: tasks that heavily require theory-driven judgment and are not suitable for statistical models, rare events or outliers that have never been seen by algorithms, and complex configural relationships between the features and the dependent variable (Dawes et al., 1989). In our context, micro-loan approval decisions do not exhibit these properties that make humans better than machines. In the finance industry, algorithms are widely used to identify credit risk, with XGBoost (Chen and Guestrin, 2016) among the most popular. Using XGBoost as our focal machine learning model, we examine the bias it inherits from human decisions.

Methodologically, our paper also builds upon the abundant work on modeling human decision dynamics through structural models. Erdem et al. (2008) use a Bayesian learning framework to model consumers' brand choices under quality signals from advertising, price, and past consumption experience. Huang et al. (2014) investigate the dynamics of users' idea-posting behavior on a crowdsourcing platform. Zhang et al. (2019) study participants' learning behavior from superstars in crowdsourcing contests. Zhang et al. (2020) examine taxi drivers' learning behavior based on fine-grained GPS observations. In this paper, we model the learning dynamics in loan application evaluators' decisions, and we disentangle and estimate the preference-based bias and the belief-based bias in their behavior.

3. Research Context and Data

3.1. Context

We obtained our data from a leading Asian micro-lending platform. The platform was founded in 2011 and offers microloans at an average size of approximately $450 USD. Loan applicants on the platform use the loans primarily to fulfill temporary financial needs including supplementary working capital for small businesses, irregular shopping needs, education spending, and medical expenses. To apply for a loan, applicants must provide their personal information such as name, gender, age, income level, and a copy of their national identity cards. They must also provide their contact persons. People under the age of 18 and all students at school or university are not qualified to apply as they usually have no independent income sources. The loan term ranges from one to eight months. The annual interest rate charged by the platform is approximately 18% (plus or minus 1%, depending on the credit line of the borrowers).

(a) Frequency of loan applications
(b) Number of approved loans
Figure 1. Statistics of Loan Applications

During our sample period, the platform evaluates applicants' credit risk manually through its employees (i.e., evaluators). All evaluators are trained regularly to maintain consistent evaluation criteria, which are derived from their collective daily work experience; no gender-bias training has been conducted by the platform. The platform does not use any AI technologies (e.g., machine learning) as automatic or auxiliary evaluation tools. Specifically, after an applicant fills in his or her personal information and submits the application in the system, the applicant is randomly assigned to an evaluator. The evaluator then decides whether to issue the loan by assessing whether the applicant can bring positive economic benefit (i.e., profit) to the platform. Loan profit is roughly calculated based on the predicted probabilities of delinquency and default. If a borrower fails to repay an installment, the borrower is regarded as delinquent. If a loan remains unpaid 90 days or more after the due date, default is confirmed by the platform. (These definitions and operations are similar to those in the literature, such as Drozd and Serrano-Padial (2017) and Pope and Sydnor (2011).)

For all new applicants, the estimates of the chances of delinquency and default are based on the collected personal information. For repeated applicants, evaluators additionally leverage information from the applicants' repayment performance on previous loans: the final overdue days (denoted as D), the proportion of overdue installments (denoted as M), the proportion of installments with a positive attitude from the borrower (denoted as A), which is measured from records of whether a borrower has shown a positive attitude towards his or her financial obligation during communication with the platform, and the proportion of installments with financial help from family or friends (denoted as H).

When a borrower becomes delinquent, the platform imposes a financial penalty. Simultaneously, debt collection methods, such as sending reminder notifications to the borrower and his or her contact persons, are implemented. Borrowers in default are prohibited from applying for loans again on the focal platform. Default records are also submitted to the personal credit record system maintained by the central government and to a shared blacklist system maintained by a consortium of micro-finance institutions. The platform may take legal action against defaulters.

3.2. Data and Description

Our data set contains fine-grained information on both the applicants whose applications were approved and those whose applications were rejected by the platform between January 2015 and September 2017 (i.e., 33 months). During the sample period, there are 311,200 loan applications in total, among which 135,938 (an approval rate of 43.68%) were approved and 175,263 were rejected by the platform. Our sample covers 139,454 borrowers; that is, the average number (frequency) of loan applications per borrower is 2.23 (= 311,200/139,454). In our sample, 53,503 (38.37%) borrowers applied more than once, and they contributed 225,248 applications in total (i.e., 4.21 on average per borrower). For these multiple-time borrowers, the average approval rate is 47.24%, whereas the average approval rate is only 34.34% for the 85,951 borrowers who applied just once. This indicates the platform's preference towards repeated borrowers, which is reasonable as these borrowers have performed well on historical loans. Figures 1(a) and 1(b) display the distributions of the frequency of loan applications and the number of approved loans, respectively.

For all applicants, we obtain their demographic and socioeconomic data, including gender, education level, monthly income level, disposable personal income per capita (DPI) of their home city, and house ownership, and their loan information, including loan amount, loan term, and annual interest rate. We likewise have detailed per-installment repayment information for the approved loans. Table 1 summarizes information on the approved and rejected applications.

Observations Repeated applicants New applicants
Mean S. D. Mean S. D.
Gender (1 = female, 0 = male) 53,503/85,952 0.19 0.39 0.18 0.39
Education level 53,503/85,952 2.25 0.64 2.27 0.63
Monthly income level 53,503/85,952 3.27 1.80 3.37 1.90
Home city DPI 53,503/85,952 2.34 1.31 2.54 1.49
House ownership (1 = self-own) 53,503/85,952 0.17 0.38 0.16 0.37
Loan amount (USD) 311,200 460.70 81.59 458.78 321.30
Loan term (month) 311,200 5.88 1.54 5.86 1.35
Yearly interest rate (%) 311,200 14.05 1.28 14.36 1.44
Note: Education level: 1 = middle school; 2 = technical school; 3 = undergraduate; 4 = postgraduate.
Table 1. Information of the Approved and Rejected Applications

3.3. Model-Free Analyses

On the platform, female applicants are generally more likely to be approved. The gaps between female and male approval rates do not shrink over time (Figure 2(a)), implying that the platform evaluators may have a persistent impression of the difference in credit risk between males and females.

(a) Average approval rate for all applicants
(b) Average default rate for new applicants
(c) Average default rate for repeated applicants
Figure 2. Time Trends of Approval and Default Rate
Figure 3. Approval Rate by Gender

When we consider only borrowers who have applied repeatedly (Figure 3), the approval gender gap shrinks, suggesting that learning helps correct the platform evaluators' prior gender bias. Figure 4 shows the trend of the default rate as the number of borrowers' applications or the number of their previously approved loans increases. Consistently, the gender gap in the default rate is smaller for repeated borrowers, and it keeps decreasing as the number of previous applications increases and as the number of previous approvals increases. This is consistent with the decreasing gender gap in the approval rate, which indicates that evaluators can learn effectively from users' previous repayment behaviors and adjust their prior belief-based bias in gender.

(a) Average Default Rate by Gender for users with different number of applications
(b) Average Default Rate by Gender for users with different number of approvals
Figure 4. Average Default Rate by Gender

As noted earlier, the evaluator relies on four signals from a user's past repayment behaviors to make approval decisions on loan applications. Figure 5 shows the values of the four signals against the number of previous applications by gender. Generally, we find that, given the number of previous applications, all four signals show similar values for females and males, indicating that these signals may help the platform evaluators adjust their prior gender bias.

(a) Signal D by Gender for users with different number of applications
(b) Signal M by Gender for users with different number of applications
(c) Signal A by Gender for users with different number of applications
(d) Signal H by Gender for users with different number of applications
Figure 5. Signals Across Number of Applications

4. Model

We consider a model in which the loan platform decides whether or not to approve a loan application by applicant i at time t (here t indicates the t-th application rather than a natural time unit). We model the evaluator's behavior in a dynamic environment where the evaluator is uncertain about the true credit quality of applicants. When a new borrower comes to the micro-lending platform, the evaluator only observes her demographics and forms a prior belief based on the demographic data. For every borrower, without any previous repayment behavior being observed, her first application is processed by the evaluator using the prior belief. If a borrower's loan application is approved at time t, then at the time of her next loan application, t+1, the evaluator will use the previous repayment behaviors as additional signals of the borrower's credit quality.

At time t = 1, without any observation of borrower i's repayment behaviors, the evaluator forms a belief about her credit quality based on i's demographics, characterized by a prior mean and a prior variance. The demographic variables include gender, age, education level, marital status, number of children, house ownership, monthly income, and the disposable personal income (DPI) of the borrower's city of residence (in 2017). We also incorporate a constant and the time of i's first application, because the overall quality of borrowers may change over time. We assume that the prior belief follows a normal distribution:

(1)

where the prior mean is the evaluator's subjective belief about the average credit quality of a borrower with the given demographics. The prior includes a gender coefficient; we normalize the female coefficient to zero, so the male coefficient captures the belief-based bias. For every borrower, we assume the evaluator believes that credit qualities carry identical uncertainty, i.e., the prior variance is the same for all borrowers.

When the evaluator processes the loan application of borrower i at time t, she has an information set that contains all of i's loan repayment histories up to time t. Given this information set, the evaluator forms a posterior belief about i's credit quality. Let the expectation of this posterior belief denote the evaluator's assessment of i's credit quality at time t, i.e.,

(2)

At time t, when the evaluator reviews the loan application from borrower i, she observes four signals from the repayment behavior on i's last loan: the final overdue days (D), the proportion of overdue installments (M), the proportion of installments with a positive attitude from the borrower (A), and the proportion of installments with financial help from family or friends (H). Some of these signals may be more informative than others. We therefore tried several different combinations of these signals and dropped M from the signal list in our final model, as it turns out to be uninformative and does not significantly affect evaluators' loan approval decisions.

In the previous literature (e.g., Erdem et al., 2008; Huang et al., 2014; Zhang et al., 2020), it is common to model such behaviors using Bayesian updates with a normal-normal conjugate prior-posterior. We also start with such a conventional normal-normal conjugate Bayesian model, i.e., each time additional signals become available, the evaluator updates her belief about a borrower's credit quality in a Bayesian fashion. However, the estimated parameters of such a Bayesian learning model show that all the signal variances are extremely small. This indicates that evaluators weight the most recent signals heavily and are not updating their beliefs in a Bayesian fashion. Therefore, we use a simplified model to capture the evaluators' updating behavior, where the posterior belief is a weighted sum of the prior belief and the new signals. We tried different combinations of the four signals; the weighted-sum model with signals D, A, and H performs the best and achieves the highest likelihood. We use this model as our main model. In this section, we introduce it in detail; in the appendix, we summarize the details of the Bayesian model.
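For reference, the conventional normal-normal conjugate update we compare against takes the standard precision-weighted form (again in our own notation, with $s_{it}$ the credit-quality level implied by a signal and $\sigma_s^2$ its variance):

\[
\mu_{it} = \frac{\sigma_{i,t-1}^{-2}\,\mu_{i,t-1} + \sigma_s^{-2}\, s_{it}}{\sigma_{i,t-1}^{-2} + \sigma_s^{-2}}, \qquad
\sigma_{it}^{-2} = \sigma_{i,t-1}^{-2} + \sigma_s^{-2}.
\]

When the estimated $\sigma_s^2$ is extremely small, the posterior mean collapses onto the newest signal, which is what motivates the simpler weighted-sum specification introduced below.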

Because the final overdue days D has a long-tailed distribution, we take the logarithm of D and use its log value in subsequent calculations. We assume the evaluators believe that all these signals are linearly related to credit quality in the following way:

(3)

where the signal equations have slope and intercept parameters for D, A, and H, and each borrower's signals are distributed around their means. Each time the evaluator observes these repayment behaviors, she updates her belief about the borrower's credit quality based on a weighted sum of the prior and the signals:

(4)

where the three weights are assigned to the signals D, A, and H. At time t, the evaluator decides whether to approve i's loan application based on her updated belief about the applicant's quality. The evaluator first calculates the probability of non-default through a sigmoid function:

(5)

Then, with this probability of non-default, the evaluator decides whether to approve or reject i's loan application at time t through a utility function. There are two key components in the utility function (Equation 6). The first component captures the expected profit of approving this loan. The second component contains the evaluator's preference-based bias. We also assume there is a random shock within the utility function. The utility function is as follows:

(6)

where the preference-based bias term depends on gender, with the female coefficient normalized to zero. The male coefficient therefore captures the preference-based bias, which persists and is not affected by observing new signals. The profit earned by the platform if the loan is paid back and the loss the platform incurs if the loan defaults are both observed values in our dataset, and the price parameter (or marginal utility of money) scales them. The non-default probability of borrower i at time t is related to her credit quality. All these parameters are estimated by maximizing the likelihood.
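Since the displayed equations did not survive extraction, the following sketch of Equations (4)-(6) is consistent with the description above but uses our own assumed notation: $\mu_{it}$ is the updated belief, $\tilde{s}^{D}_{it}, \tilde{s}^{A}_{it}, \tilde{s}^{H}_{it}$ are the quality levels implied by the three signals through Equation (3), $w_0, w_D, w_A, w_H$ are the weights (the prior weight $w_0$ may equally be parameterized as $1 - w_D - w_A - w_H$), $R_{it}$ and $L_{it}$ are the observed profit and loss, $\gamma$ is the price parameter, $\alpha_{g(i)}$ is the preference-based bias, and $\varepsilon_{it}$ is the random shock:

\[
\mu_{it} = w_0\,\mu_{i,t-1} + w_D\,\tilde{s}^{D}_{it} + w_A\,\tilde{s}^{A}_{it} + w_H\,\tilde{s}^{H}_{it},
\]
\[
p_{it} = \frac{1}{1 + e^{-\mu_{it}}}, \qquad
U_{it} = \gamma\big[p_{it} R_{it} - (1 - p_{it}) L_{it}\big] - \alpha_{g(i)} + \varepsilon_{it}, \qquad \alpha_{\mathrm{female}} = 0,
\]

where a positive $\alpha_{\mathrm{male}}$ penalizes male applicants, matching the sign discussion in Section 5.1.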

4.1. Estimation Results

We report parameter estimates for our model in Table 2. Both the preference-based bias and the belief-based bias (with the female coefficients normalized to zero) have the expected signs, implying that the evaluator has a preference for female loan applicants. The estimate of the belief-based bias is significantly negative, suggesting that evaluators hold a lower prior belief about males' credit qualities. The estimate of the preference-based bias is significantly positive, implying a significant preference-based gender bias that favors female applicants and cannot be corrected by observing repayment behaviors.

Apart from the gender coefficient, all other prior coefficients also have the expected signs. This is consistent with the evaluator's preference for applicants with a better socioeconomic status, as we observe in the data.

Parameters Estimate Std. error
0.2390 0.0220
Signal D
-0.0138 0.0008
0.5219 0.0207
Signal A
0.3492 0.0034
0.3788 0.0063
Signal H
0.9803 0.0800
0.1252 0.0399
z 0.0154 0.0001
Coefficients of the prior
-0.8961 0.0119
-0.1042 0.0088
-0.0201 0.0003
0.1458 0.0058
0.2443 0.0038
0.0936 0.0013
0.1176 0.0018
0.0097 0.0007
0.9780 0.0058
0.0121 0.0009
Note: *; **; ***

Table 2. Structural Model Estimation Results

Our estimates of the slopes in the signal D-quality, signal A-quality, and signal H-quality relationships are negative, positive, and positive, respectively. These results suggest that in the evaluator's decision-making process, a larger number of final overdue days is associated with poorer credit quality, while a higher proportion of installments with a positive attitude is positively associated with loan approval. Receiving financial help from family and friends is also viewed by evaluators as a positive signal of credit quality and is associated with a lower default probability.

As can be seen in Equation 4, the evaluator updates her belief based on a weighted sum of the prior belief and the signals. The estimates of the three signal weights indicate that the evaluator concentrates most of the weight on a single signal, with a weight as high as 0.9780.

In Table 3, we compare the characteristics of the actually observed approved users with the expected characteristics of the approved users that our structural model predicts. All these statistics are very similar between the actual observations and our model's predictions, suggesting that our model captures the decision process well.

t   Number of users (SM, actual)   Number of females (SM, actual)   Mean of housing (SM, actual)   Mean of DPI (SM, actual)   Mean of education (SM, actual)   Mean of income (SM, actual)
(SM = structural model prediction; actual = observed data)
1 53539.33 51019 12113.01 11797 0.085023 0.082192 1.078531 1.038077 0.967054 0.926234 1.525907 1.470026
2 22371.7 22486 4742.72 4690 0.03179 0.031874 0.462848 0.464454 0.389653 0.391233 0.621982 0.626156
3 14110.73 13993 2802.512 2734 0.018555 0.018551 0.277881 0.27404 0.240154 0.237777 0.379792 0.376518
4 10139.22 10392 1967.239 2015 0.013 0.013295 0.192308 0.194846 0.17002 0.174488 0.264952 0.270864
5 7917.322 8335 1520.431 1605 0.009977 0.010598 0.146994 0.153377 0.1317 0.138655 0.202175 0.211797
6 6376.219 6915 1222.235 1314 0.008252 0.009007 0.117074 0.125568 0.105735 0.114719 0.161333 0.174509
7 5035.121 5481 970.1422 1056 0.006301 0.006884 0.092232 0.099043 0.083498 0.090826 0.126435 0.137056
8 3629.12 3968 692.8316 755 0.004384 0.004905 0.065735 0.0714 0.060267 0.065685 0.090112 0.098269
9 2342.886 2564 441.4362 484 0.002798 0.003091 0.042753 0.046345 0.038917 0.042602 0.057552 0.062881
10 1150.928 1272 219.6785 234 0.001276 0.001413 0.020986 0.02309 0.019207 0.021233 0.029364 0.032297
11 436.4425 475 80.60686 89 0.000486 0.000552 0.00845 0.009093 0.007253 0.007866 0.011423 0.012348
12 130.8604 129 21.06548 20 0.000149 0.000136 0.002757 0.002718 0.002312 0.002273 0.003748 0.003571
13 47.87013 51 8.762629 9 3.74E-05 3.59E-05 0.000959 0.001011 0.000824 0.000875 0.001393 0.001492
14 14.98715 15 1.997452 2 1.35E-05 1.43E-05 0.00036 0.000359 0.000251 0.000251 0.000443 0.000437
15 5.211481 5 0 0 0 0 0.00013 0.0001 9.28E-05 9.32E-05 0.000181 0.000172
16 0.999583 1 0 0 0 0 5.73E-05 5.74E-05 1.43E-05 1.43E-05 2.87E-05 2.87E-05
Table 3. Comparison of Simulated and Actual Characteristics of Approved Users

5. Policy Simulations

We conduct several sets of counterfactual simulations to evaluate the effects of eliminating the biases found in the data on the outcomes of loan applications across gender groups. Our counterfactual analyses are done on a second dataset, which covers all the loan records from a one-month experimental period (“full sample” hereafter). During this period, all applicants were approved without screening. As a result, we have the true labels of all users, which ensures that our results are based on the entire user distribution rather than just the approved users, who have a different distribution from the whole user pool.

Specifically, we calculate the expected profits of the platform based on the approval probabilities predicted by a number of variants of the estimated structural model. We also examine the gender gap in the approval true positive rate (TPR). We adopt the concept of equal opportunity, one of the most popular fairness notions, to measure the extent of bias. Equal opportunity requires that qualified individuals, no matter what their sensitive attributes are, have an equal opportunity to receive favorable outcomes. In our loan application setting, this means the two gender groups should have the same true positive rate. We find that eliminating either the preference-based bias or the belief-based bias can simultaneously increase the platform's profit and reduce the gender gap in loan approval decisions (TPR). In Section 6, we also feed the counterfactual datasets into machine learning algorithms to see how machines capture these different behaviors.

Figure 6. Two datasets used in the training and evaluation

5.1. Does eliminating preference-based bias help with the decision making? How does it affect the platform’s payoff?

As noted above, preference-based bias refers to people's animus towards a particular group: no matter how well this group actually behaves, people with preference-based bias make judgments and decisions based on prejudice. In our setting, we focus on the preference-based bias in gender, which in our model is captured by the gender-specific term in the utility function. We normalize the female coefficient to zero, and the estimated male coefficient is positive. Because Equation 6 places a minus sign in front of this term, the positive estimate suggests that the evaluator has a preference for female borrowers and a prejudice against male borrowers.

In an ideal setting, all evaluators are trained so that the prejudice with respect to gender is completely removed. The elimination of the preference-based bias can be operationalized by setting the male preference-bias coefficient to zero. We simulate the evaluators' decisions with this coefficient set to zero and all other parameters unchanged. We then compare the true positive rate (TPR) and the payoff under the original decision process and under the process with the preference-based bias removed on our full sample.
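To make the counterfactual concrete, here is a minimal sketch of the simulation step, assuming a logit choice rule for the random utility shock; all names (approval_probability, alpha_male, gamma, etc.) are our own illustrative choices, not the paper's actual code:

```python
import numpy as np

def approval_probability(mu, profit, loss, gamma, alpha_male, is_male):
    """Approval probability implied by the utility
    U = gamma * [p*profit - (1 - p)*loss] - alpha_male * is_male + eps,
    with eps assumed to follow a standard logistic distribution."""
    p_repay = 1.0 / (1.0 + np.exp(-mu))  # sigmoid of the updated belief (Eq. 5)
    utility = gamma * (p_repay * profit - (1.0 - p_repay) * loss) - alpha_male * is_male
    return 1.0 / (1.0 + np.exp(-utility))

def expected_profit(mu, profit, loss, repaid, gamma, alpha_male, is_male):
    """Expected platform profit on the fully-approved experimental sample,
    where `repaid` holds the observed repayment outcome of every applicant."""
    p_approve = approval_probability(mu, profit, loss, gamma, alpha_male, is_male)
    return float(np.sum(p_approve * np.where(repaid, profit, -loss)))

# Counterfactual: remove the preference-based bias by setting alpha_male to zero
# while keeping all other estimated parameters fixed, e.g.
#   profit_biased   = expected_profit(mu, R, L, repaid, gamma_hat, alpha_hat, is_male)
#   profit_debiased = expected_profit(mu, R, L, repaid, gamma_hat, 0.0, is_male)
```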

When the evaluators make approval decisions without the preference-based bias in gender, the TPRs for male users are all higher than their corresponding values under the original decision process. Note that for female users the bias coefficient is normalized to zero, so their TPR stays the same under the two decision-making processes. In sum, eliminating the preference-based bias generally decreases the TPR gap between the two gender groups in our setting, but the decrease is relatively small (from 9.69% to 6.94%, Table 5).

We also compare the platform's expected welfare (profit) under the current decision-making process and under the process with the preference-based bias removed. We observe that by eliminating the preference-based bias in the loan approval process, the platform obtains a higher profit (Table 4). The increase in profit results from better decisions on male applicants. Specifically, the increase is driven by the gain from raising the approval probability for male borrowers who eventually repay their loans, which exceeds the loss from raising the approval probability for male borrowers who default. These findings suggest that although the current decision-making process incorporates gender information to identify high-quality users, the evaluators underrate male borrowers too much. Therefore, the preference-based bias results in suboptimal decisions.

5.2. Does removing gender from the prior belief formation help with the decision making? How does it affect the platform’s payoff?

In this subsection, we examine the effects of removing the gender information from the prior belief, i.e., setting the gender coefficient in the prior to zero. This coefficient captures the belief-based bias. Belief-based bias refers to evaluators' subjective beliefs about certain groups; this kind of belief can be updated by observing additional behavioral signals, i.e., repayment behaviors in this paper. In our setting, we normalize the female coefficient to zero, so the male coefficient captures the relative belief-based bias against males compared with females. The estimated value is negative, which indicates that the evaluators hold a subjective prior belief in favor of female borrowers. We compare the TPR and the platform's payoff under the original decision process and under the process with the belief-based bias removed on our full sample.

Under the current decision-making process, we observe a higher TPR for females than for males. Since the female coefficient is normalized to zero, the TPR of females does not change between the two decision processes. Gender information in the prior decreases the male TPR: with access to gender information, the evaluators form a prior belief that underrates male borrowers, and the males rejected as a result are generally of good enough credit quality.

                                Belief-based bias present   Belief-based bias removed
Preference-based bias present   151046.8                    156957.8
Preference-based bias removed   155036.7                    160190.4
Table 4. The expected profits under different decision processes.

                                Belief-based bias present   Belief-based bias removed
Preference-based bias present   9.69%                       5.14%
Preference-based bias removed   6.94%                       2.48%
Table 5. The gender gap in the credit evaluation TPR (female's TPR minus male's TPR).

                                Belief-based bias present   Belief-based bias removed
Preference-based bias present   12.31%                      5.51%
Preference-based bias removed   8.62%                       2.04%
Table 6. The gender gap in the credit evaluation TPR (female's TPR minus male's TPR). New users only.

                                Belief-based bias present   Belief-based bias removed
Preference-based bias present   5.13%                       4.10%
Preference-based bias removed   3.85%                       2.70%
Table 7. The gender gap in the credit evaluation TPR (female's TPR minus male's TPR). Repeated users only.

We further investigate the effect of the belief-based bias on the expected profit of the platform. With the belief-based bias removed, the platform obtains a larger profit. Setting the prior gender coefficient to zero increases the probability of approval for all males, and the gain from raising the approval probability for male borrowers who repay dominates the loss from raising the approval probability for male borrowers who default.

On the borrower side, when the belief-based bias is removed, the gender gap in TPR becomes smaller (Table 5), decreasing from 9.69% to 5.14%. This decrease is larger than the one resulting from eliminating the preference-based bias. When both biases are removed, we achieve the smallest gender gap in the TPR (2.48%).

5.3. Are these effects different for new users and repeated users?

To further investigate the underlying mechanism, we look into the new users and the repeated users separately. Table 6 shows the TPR gaps in the original and counterfactual settings for new users, and Table 7 shows the TPR gaps for repeated users. Consistent with the previous observations, eliminating either bias helps mitigate the gender gap for both new users and repeated users.

We observe that, overall, repeated users face a smaller bias than new users. This is because of the different characteristic distributions of new and repeated applicants, and because of the signals used to learn repeated applicants' credit quality. When we remove the preference-based bias but keep the belief-based bias, human evaluators' gender bias decreases from 12.31% to 8.62% for new applicants and from 5.13% to 3.85% for repeated applicants. When we instead keep the preference-based bias but remove the belief-based bias, the gender bias shows a larger decrease, to 5.51%, for new applicants, but a smaller decrease, to 4.10%, for repeated applicants. This is because, for repeated users, humans use signals to update the prior belief, so the belief-based bias becomes less influential for repeated users.

6. Machine Learning Bias Inherited from Human Bias

In this section, we examine how ML algorithms inherit human bias by feeding the real-world dataset and the counterfactual datasets into ML algorithms. Specifically, for each counterfactual setting, we run our structural model with the counterfactual parameters to simulate the approval decisions and use the approved users' loan records as the input to our machine learning model. For users who were not approved in reality, we use matching to fill in their loan repayment behavior. We use XGBoost (Chen and Guestrin, 2016) as our machine learning model; XGBoost is widely used in loan evaluation and the related literature (Lu et al., 2019; Fu et al., 2021). As in Section 5, we report the expected profits (Table 8) and the gender gap in the TPR (Table 9).
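A minimal sketch of this training-and-evaluation loop is shown below; the data-frame columns, feature list, and helper names are our own illustrative assumptions, not the platform's actual pipeline:

```python
from xgboost import XGBClassifier

def tpr_gap(y_true, y_pred, gender):
    """Female TPR minus male TPR among borrowers who actually repaid
    (the equal-opportunity gap; gender coding 1 = female, 0 = male as in Table 1)."""
    tpr = {}
    for g in (1, 0):
        mask = (gender == g) & (y_true == 1)
        tpr[g] = y_pred[mask].mean()
    return tpr[1] - tpr[0]

def train_and_evaluate(train_df, eval_df, features):
    """Fit XGBoost on one (real or counterfactual) approval dataset and
    evaluate its TPR gap on the fully-approved experimental sample."""
    model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(train_df[features], train_df["repaid"])
    preds = model.predict(eval_df[features])
    return tpr_gap(eval_df["repaid"].values, preds, eval_df["gender"].values)

# gap_observed  = train_and_evaluate(real_decisions_df, full_sample_df, FEATURES)
# gap_no_pref   = train_and_evaluate(cf_no_preference_bias_df, full_sample_df, FEATURES)
# gap_no_belief = train_and_evaluate(cf_no_belief_bias_df, full_sample_df, FEATURES)
```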

For the XGBoost model, when the preference-based bias is removed, the expected profit increases from 167260.2 to 172125.8 (increases by 4865.6, compared with the structural model’s corresponding increase of 3,989.9), and the TPR gap decreases from 7.49% to 6.20% (decreases by 1.29%, compared with the structural model’s decrease of 2.75%). This indicates that machine learning models can mitigate the effects of the preference-based bias. When the belief-based bias is removed, the expected profit increases from 167260.2 to 171352.0 (by 4091.8, compared with the structural model’s increase of 5911), and the TPR gap decreases from 7.49% to 7.43% (decreases by 0.06%, compared with the structural model’s decrease of 4.55%). This indicates that machine learning models can mitigate the effects of the belief-based bias.

                                Belief-based bias present   Belief-based bias removed
Preference-based bias present   167260.2                    171352.0
Preference-based bias removed   172125.8                    174534.7
Table 8. The expected profits under different decision processes.

                                Belief-based bias present   Belief-based bias removed
Preference-based bias present   7.49%                       7.43%
Preference-based bias removed   6.20%                       5.57%
Table 9. The gender gap in the credit evaluation TPR (female's TPR minus male's TPR).

                                Belief-based bias present   Belief-based bias removed
Preference-based bias present   6.86%                       6.63%
Preference-based bias removed   4.66%                       3.98%
Table 10. The gender gap in the credit evaluation TPR (female's TPR minus male's TPR). New users only.

                                Belief-based bias present   Belief-based bias removed
Preference-based bias present   2.05%                       2.31%
Preference-based bias removed   2.30%                       1.94%
Table 11. The gender gap in the credit evaluation TPR (female's TPR minus male's TPR). Repeated users only.

Similarly, we separate the new users and the repeated users to check the lower-level effects (Table 10 and Table 11). Consistent with our observations in Section 5, repeated users face a smaller bias than new users. This indicates that signals can help mitigate bias in machine learning models as well. Although for repeated users there is a slight increase in the gap when removing either one of the two biases (2.05% to 2.30%, and 2.05% to 2.31%), the increase is not significant (p-value = 0.5312; p-value = 0.3047). This supports that, for machine learning models, removing upstream bias helps mitigate the gap for both new users and repeated users. When we compare the de-biased outcomes of human decision making and algorithmic decision making, we find that humans can achieve a smaller TPR gap (2.04%) than the algorithm (3.98%) for new applicants, but a larger TPR gap (2.70%) than the algorithm (1.94%) for repeated applicants. This finding suggests that the optimal strategy is to train human evaluators sufficiently to eliminate their decision bias, and then use human evaluators for new users while using the machine learning algorithm to evaluate repeated users.

7. Discussion and Conclusions

We have proposed a framework that uses structural modeling to distinguish and estimate two types of human bias, i.e., belief-based bias and preference-based bias, from observational data. In our micro-lending context, the evaluators hold a persistent preference-based bias but learn from three distinct signals (the final overdue days D, the proportion of installments with a positive attitude from the borrower A, and the proportion of installments with financial help from family and friends H), which update the evaluators' belief-based bias. The model was estimated on real-world data and explains the data well.

The estimation results of the structural model imply that the evaluators possess a preference-based bias in favor of female applicants and against male ones; they also hold a belief-based bias with a higher prior belief about females' credit qualities. By observing repayment behaviors, the evaluators quickly update their beliefs about borrowers' credit qualities, and all three signals play significant roles in the evaluators' learning.

The results from our policy simulations suggest that eliminating either the preference-based bias or the belief-based bias can increase the platform's profits. The underlying mechanism in the two counterfactual settings is the same: because the gain from raising the approval probability for borrowers who repay exceeds the loss from raising the approval probability for borrowers who default, the platform achieves higher profits. On the borrower side, eliminating either type of bias reduces the gender gap in the credit evaluation true positive rate.

We also feed the real-world dataset and the counterfactual datasets into an XGBoost model to examine how ML algorithms inherit human bias. We find that machine learning algorithms can mitigate both the preference-based bias and the belief-based bias.

Our paper also has certain limitations that can be addressed in future work. First, microloan users are generally not stable in their financial condition, which may be one plausible reason why the evaluators rely heavily on the latest repayment behaviors to form a belief about borrowers' credit qualities. In a more stable setting, such as credit cards or mortgages, evaluators may update their beliefs about borrowers' credit qualities more gradually. Second, in our policy simulations, we only consider changes on the evaluator side. In reality, changes in evaluators' previous approval behaviors can also lead to changes in borrowers' subsequent application behaviors; future work may take both sides into consideration. Third, as pioneering work on quantifying different types of bias, we do not consider the interaction of gender and other attributes due to model complexity and identification issues; future work may explore those interaction effects. Despite these limitations, to the best of our knowledge, this paper is the first to use structural modeling to uncover and distinguish different types of bias in decision-making processes based on observational data. As machine learning and AI models are increasingly deployed across many decision-making scenarios, it is increasingly important to understand the sources of bias and propose well-targeted solutions.

References

  • A. F. Alesina, F. Lotti, and P. E. Mistrulli (2013) Do women pay more for credit? evidence from italy. Journal of the European Economic Association 11, pp. 45–66. Cited by: §2.
  • D. Arnold, W. Dobbie, and C. S. Yang (2018) Racial bias in bail decisions. The Quarterly Journal of Economics 133 (4), pp. 1885–1932. Cited by: §1.
  • R. Bartlett, A. Morse, R. Stanton, and N. Wallace (2019) Consumer-lending discrimination in the fintech era. Technical report National Bureau of Economic Research. Cited by: §2, §2.
  • R. Berk, H. Heidari, S. Jabbari, M. Joseph, M. Kearns, J. Morgenstern, S. Neel, and A. Roth (2017) A convex framework for fair regression. arXiv preprint arXiv:1706.02409. Cited by: §2.
  • M. Bertrand, D. Chugh, and S. Mullainathan (2005) Implicit discrimination. American Economic Review 95 (2), pp. 94–98. Cited by: §1.
  • M. Biernat and M. Manis (1994) Shifting standards and stereotype-based judgments.. Journal of personality and social psychology 66 (1), pp. 5. Cited by: §1.
  • D. G. Blanchflower, P. B. Levine, and D. J. Zimmerman (2003) Discrimination in the small-business credit market. Review of Economics and Statistics 85 (4), pp. 930–943. Cited by: §2.
  • J. A. Bohren, A. Imas, and M. Rosenberg (2019) The dynamics of discrimination: theory and evidence. American economic review 109 (10), pp. 3395–3436. Cited by: §1, §2, §2.
  • S. Cai, X. Lin, D. Xu, and X. Fu (2016) Judging online peer-to-peer lending behavior: a comparison of first-time and repeated borrowing requests. Information & Management 53 (7), pp. 857–867. Cited by: §2.
  • C. F. Camerer (2019)

    24. artificial intelligence and behavioral economics

    .
    In The Economics of Artificial Intelligence, pp. 587–610. Cited by: §2.
  • D. Chen, X. Li, and F. Lai (2017) Gender discrimination in online peer-to-peer credit lending: evidence from a lending platform in China. Electronic Commerce Research 17 (4), pp. 553–583. Cited by: §2.
  • T. Chen and C. Guestrin (2016) XGBoost: a scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. Cited by: §2, §6.
  • X. Chen, B. Huang, and D. Ye (2020) Gender gap in peer-to-peer lending: evidence from China. Journal of Banking & Finance 112, pp. 105633. Cited by: §2.
  • P. Choudhury, E. Starr, and R. Agarwal (2020) Machine learning and human capital complementarities: experimental evidence on bias mitigation. Strategic Management Journal 41 (8), pp. 1381–1411. Cited by: §2.
  • A. Chouldechova (2017) Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big data 5 (2), pp. 153–163. Cited by: §2.
  • S. Corbett-Davies and S. Goel (2018) The measure and mismeasure of fairness: a critical review of fair machine learning. arXiv preprint arXiv:1808.00023. Cited by: §1, §1.
  • B. Cowgill, F. Dell’Acqua, S. Deng, D. Hsu, N. Verma, and A. Chaintreau (2020) Biased programmers? Or biased data? A field experiment in operationalizing AI ethics. In Proceedings of the 21st ACM Conference on Economics and Computation, pp. 679–681. Cited by: §2.
  • X. Cui (2019) Occupational identity discrimination in peer-to-peer lending. Journal of Applied Finance and Banking 9 (6), pp. 249–278. Cited by: §2.
  • D. Cumming, T. Y. Leung, and O. Rui (2015) Gender diversity and securities fraud. Academy of Management Journal 58 (5), pp. 1572–1593. Cited by: §1.
  • R. M. Dawes, D. Faust, and P. E. Meehl (1989) Clinical versus actuarial judgment. Science 243 (4899), pp. 1668–1674. Cited by: §2.
  • R. M. Dawes (1971) A case study of graduate admissions: application of three principles of human decision making. American Psychologist 26 (2), pp. 180. Cited by: §2.
  • W. Dobbie, A. Liberman, D. Paravisini, and V. Pathania (2018) Measuring bias in consumer lending. Technical report National Bureau of Economic Research. Cited by: §2.
  • L. A. Drozd and R. Serrano-Padial (2017) Modeling the revolving revolution: the debt collection channel. American Economic Review 107 (3), pp. 897–930. Cited by: footnote 2.
  • T. Erdem, M. P. Keane, and B. Sun (2008) A dynamic model of brand choice when price and advertising signal product quality. Marketing Science 27 (6), pp. 1111–1125. Cited by: §2, §4.
  • M. Ewens and R. R. Townsend (2020) Are early stage investors biased against women?. Journal of Financial Economics 135 (3), pp. 653–677. Cited by: §2.
  • M. F. Ferguson and S. R. Peters (1995) What constitutes evidence of discrimination in lending?. The Journal of Finance 50 (2), pp. 739–748. Cited by: §2.
  • R. Fu, M. Aseri, P. V. Singh, and K. Srinivasan (2019) ’Un’fair machine learning algorithms. Available at SSRN 3408275. Cited by: §1, §2.
  • R. Fu, Y. Huang, and P. V. Singh (2021) Crowds, lending, machine, and bias. Information Systems Research 32 (1), pp. 72–92. Cited by: §1, §2, §6.
  • A. Fuster, P. Goldsmith-Pinkham, T. Ramadorai, and A. Walther (2020) Predictably unequal? The effects of machine learning on credit markets (October 1, 2020). Cited by: §1, §2.
  • U. Gneezy, J. List, and M. K. Price (2012) Toward an understanding of why people discriminate: evidence from a series of natural field experiments. Technical report National Bureau of Economic Research. Cited by: §1, §2.
  • L. R. Goldberg (1970) Man versus model of man: a rationale, plus some evidence, for a method of improving on clinical inferences. Psychological Bulletin 73 (6), pp. 422. Cited by: §2.
  • W. M. Grove, D. H. Zald, B. S. Lebow, B. E. Snitz, and C. Nelson (2000) Clinical versus mechanical prediction: a meta-analysis. Psychological Assessment 12 (1), pp. 19. Cited by: §2.
  • M. Hardt, E. Price, and N. Srebro (2016) Equality of opportunity in supervised learning. arXiv preprint arXiv:1610.02413. Cited by: §1, §2.
  • X. Hu, C. Rudin, and M. Seltzer (2019) Optimal sparse decision trees. In Advances in Neural Information Processing Systems, Vol. 32. Cited by: §2.
  • Y. Huang, P. V. Singh, and K. Srinivasan (2014) Crowdsourcing new product ideas under consumer learning. Management Science 60 (9), pp. 2138–2159. Cited by: §2, §4.
  • T. F. Icard (2018) Bayes, bounds, and rational analysis. Philosophy of Science 85 (1), pp. 79–101. Cited by: §1, §2.
  • A. Jadbabaie, P. Molavi, A. Sandroni, and A. Tahbaz-Salehi (2012) Non-bayesian social learning. Games and Economic Behavior 76 (1), pp. 210–225. Cited by: §2.
  • F. Kamiran, T. Calders, and M. Pechenizkiy (2010) Discrimination aware decision tree learning. In 2010 IEEE International Conference on Data Mining, pp. 869–874. Cited by: §2.
  • D. Kim (2020) The importance of a borrower’s track record on repayment performance: evidence in P2P lending market. The Journal of Asian Finance, Economics, and Business 7 (7), pp. 85–93. Cited by: §2.
  • J. Kleinberg, J. Ludwig, S. Mullainathan, and A. Rambachan (2018a) Algorithmic fairness. In AEA Papers and Proceedings, Vol. 108, pp. 22–27. Cited by: §1, §2.
  • J. Kleinberg, J. Ludwig, S. Mullainathan, and C. R. Sunstein (2018b) Discrimination in the age of algorithms. Journal of Legal Analysis 10, pp. 113–174. Cited by: §2.
  • A. Lambrecht and C. Tucker (2019) Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science 65 (7), pp. 2966–2981. Cited by: §2.
  • M. Lin and S. Viswanathan (2016) Home bias in online investments: an empirical study of an online crowdfunding market. Management Science 62 (5), pp. 1393–1414. Cited by: §2.
  • T. Lu, Y. Zhang, and B. Li (2019) The value of alternative data in credit risk prediction: evidence from a large field experiment. International Conference on Information Systems. Cited by: §1, §2, §6.
  • Y. Lu, B. Gu, Q. Ye, and Z. Sheng (2012) Social influence and defaults in peer-to-peer lending networks. Cited by: §1.
  • K. Lum and J. Johndrow (2016) A statistical framework for fair predictive algorithms. arXiv preprint arXiv:1610.08077. Cited by: §2.
  • A. Mateescu (2015) Peer-to-peer lending. Data & Society Research Institute 2. Cited by: §1.
  • C. A. Parsons, J. Sulaeman, M. C. Yates, and D. S. Hamermesh (2011) Strike three: discrimination, incentives, and evaluation. American Economic Review 101 (4), pp. 1410–35. Cited by: §1.
  • D. G. Pope and J. R. Sydnor (2011) What’s in a picture? Evidence of discrimination from Prosper.com. Journal of Human Resources 46 (1), pp. 53–92. Cited by: footnote 2.
  • C. Rudin and Y. Shaposhnik (2019) Globally-consistent rule-based summary-explanations for machine learning models: application to credit-risk evaluation. Available at SSRN 3395422. Cited by: §2.
  • C. Rudin, C. Wang, and B. Coker (2020) Broader issues surrounding model transparency in criminal justice risk scoring. Harvard Data Science Review 2 (1). Cited by: §2.
  • C. Rudin (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1 (5), pp. 206–215. Cited by: §2, §2.
  • J. L. Skeem and C. T. Lowenkamp (2016) Risk, race, and recidivism: predictive bias and disparate impact. Criminology 54 (4), pp. 680–712. Cited by: §1.
  • M. T. Stevenson and J. L. Doleac (2019) Algorithmic risk assessment in the hands of humans. Available at SSRN 3489440. Cited by: §2.
  • J. Sydnor and D. Pope (2011) What’s in a picture? Evidence of discrimination of loan fundability in the Prosper.com marketplace. Journal of Human Resources 46 (1), pp. 53–92. Cited by: §2.
  • S. Tolan, M. Miron, E. Gómez, and C. Castillo (2019) Why machine learning may lead to unfairness: evidence from risk assessment for juvenile justice in Catalonia. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, pp. 83–92. Cited by: §1.
  • X. Wang, D. Zhang, X. Zeng, and X. Wu (2013) A Bayesian investment model for online P2P lending. In Frontiers in Internet Technologies, pp. 21–30. Cited by: §2.
  • S. Zhang and A. J. Yu (2013) Forgetful Bayes and myopic planning: human learning and decision-making in a bandit setting. In NIPS, pp. 2607–2615. Cited by: §1.
  • S. Zhang, P. V. Singh, and A. Ghose (2019) A structural analysis of the role of superstars in crowdsourcing contests. Information Systems Research 30 (1), pp. 15–33. Cited by: §2.
  • Y. Zhang, B. Li, and R. Krishnan (2020) Learning individual behavior using sensor data: the case of global positioning system traces and taxi drivers. Information Systems Research 31 (4), pp. 1301–1321. Cited by: §2, §4.