Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring
Ranking algorithms are widely employed in online hiring platforms such as LinkedIn, TaskRabbit, and Fiverr. Since these platforms impact the livelihood of millions of people, it is important to ensure that the underlying algorithms do not adversely affect minority groups. However, prior research has demonstrated that the ranking algorithms employed by these platforms are prone to a variety of undesirable biases. To address this problem, fair ranking algorithms (e.g., Det-Greedy) that increase the exposure of underrepresented candidates have been proposed in recent literature. However, there is little to no work exploring whether these fair ranking algorithms actually improve real-world outcomes (e.g., hiring decisions) for minority groups. Furthermore, there is no clear understanding of how other factors (e.g., job context, inherent biases of the employers) impact the real-world outcomes of minority groups. In this work, we study how gender biases manifest in online hiring platforms and how they impact real-world hiring decisions. More specifically, we analyze several sources of gender bias, including the nature of the ranking algorithm, the job context, and the inherent biases of employers, and establish how these factors interact to affect real-world hiring decisions. To this end, we experiment with three ranking algorithms on three job contexts using real-world data from TaskRabbit. We simulate hiring scenarios on TaskRabbit by carrying out a large-scale user study on Amazon Mechanical Turk and leverage the responses from this study to understand the effect of each of the aforementioned factors. Our results demonstrate that fair ranking algorithms can be an effective tool for increasing the hiring of candidates from underrepresented gender groups, but they induce inconsistent outcomes across candidate features and job contexts.
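The fair ranking algorithms referenced in the abstract (e.g., Det-Greedy) re-rank candidates so that each group's share of the top positions stays close to a target proportion while otherwise preferring higher-scoring candidates. Below is a minimal, illustrative Python sketch of a greedy constrained re-ranker in that spirit; the function name, signature, and data layout are assumptions made for illustration and are not the implementation evaluated in the paper or used by TaskRabbit.

```python
from collections import defaultdict
from typing import Dict, List, Tuple
import math


def greedy_fair_rerank(
    candidates: List[Tuple[str, str, float]],  # (candidate_id, group, relevance score)
    target_props: Dict[str, float],            # desired share of positions per group
    k: int,
) -> List[str]:
    """Greedily build a top-k ranking that keeps each group's count close to its
    target proportion at every prefix, in the spirit of Det-Greedy-style re-ranking."""
    # Per-group queues of candidates, highest score first.
    queues: Dict[str, List[Tuple[str, float]]] = defaultdict(list)
    for cid, group, score in sorted(candidates, key=lambda c: -c[2]):
        queues[group].append((cid, score))

    ranking: List[str] = []
    counts: Dict[str, int] = defaultdict(int)

    for pos in range(1, k + 1):
        # Groups that have fallen below the floor implied by their target proportion.
        below_min = [g for g in target_props
                     if queues[g] and counts[g] < math.floor(target_props[g] * pos)]
        # Groups that can still be shown without exceeding their ceiling.
        below_max = [g for g in target_props
                     if queues[g] and counts[g] < math.ceil(target_props[g] * pos)]
        eligible = below_min or below_max or [g for g in queues if queues[g]]
        if not eligible:
            break
        # Among eligible groups, take the one whose best remaining candidate scores highest.
        chosen = max(eligible, key=lambda g: queues[g][0][1])
        cid, _ = queues[chosen].pop(0)
        ranking.append(cid)
        counts[chosen] += 1
    return ranking


# Toy usage with hypothetical candidates and equal target exposure for two groups.
pool = [("a", "f", 0.90), ("b", "m", 0.95), ("c", "f", 0.80), ("d", "m", 0.85)]
print(greedy_fair_rerank(pool, {"f": 0.5, "m": 0.5}, k=4))  # e.g. ['b', 'a', 'd', 'c']
```

The key design choice, shared by this sketch and the algorithms studied in the paper, is that fairness constraints are enforced at every ranking prefix rather than only over the full list, since exposure on hiring platforms is concentrated in the top positions.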