Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search

04/30/2019
by Sahin Cem Geyik, et al.

Recently, policymakers, regulators, and advocates have raised awareness about the ethical, policy, and legal challenges posed by machine learning and data-driven systems. In particular, they have expressed concerns about their potentially discriminatory impact, for example, due to inadvertent encoding of bias into automated decisions. For search and recommendation systems, our goal is to understand whether there is bias in the underlying machine learning models, and devise techniques to mitigate the bias. This paper presents a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems. We first propose complementary measures to quantify bias with respect to protected attributes such as gender and age. We then present algorithms for computing fairness-aware re-ranking of results towards mitigating algorithmic bias. Our algorithms seek to achieve a desired distribution of top ranked results with respect to one or more protected attributes. We show that such a framework can be utilized to achieve fairness criteria such as equality of opportunity and demographic parity depending on the choice of the desired distribution. We evaluate the proposed algorithms via extensive simulations and study the effect of fairness-aware ranking on both bias and utility measures. We finally present the online A/B testing results from applying our framework towards representative ranking in LinkedIn Talent Search. Our approach resulted in tremendous improvement in the fairness metrics without affecting the business metrics, which paved the way for deployment to all LinkedIn Recruiter users worldwide. Ours is the first large-scale deployed framework for ensuring fairness in the hiring domain, with the potential positive impact for more than 575M LinkedIn members.
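The two core ideas in the abstract, measuring how far the top-k results deviate from a desired distribution over a protected attribute, and greedily re-ranking so each prefix of the result list tracks that distribution, can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's exact algorithms or metric definitions: `skew` is a log-ratio representation measure in the spirit of the paper's bias metrics, and `greedy_rerank` is a simplified greedy re-ranker that enforces per-group minimum counts at every rank, loosely in the style of the paper's deterministic re-ranking approach.

```python
import math
from collections import Counter

def skew(ranked_attrs, attr_value, desired_prop, k):
    """Log-ratio of the observed proportion of attr_value in the top-k
    results to its desired proportion. 0 means the top-k matches the
    target; positive/negative means over-/under-representation."""
    observed = ranked_attrs[:k].count(attr_value) / k
    eps = 1e-12  # guards against log(0) when a group is absent
    return math.log((observed + eps) / (desired_prop + eps))

def greedy_rerank(candidates, desired_dist, k):
    """Greedily build a top-k list whose every prefix tracks a desired
    distribution over a protected attribute.

    candidates: list of (score, attr) pairs sorted by score descending.
    desired_dist: dict mapping attribute value -> desired proportion.
    At each rank, pick the highest-scoring candidate from any group still
    below its floor(desired share * rank) minimum; if every group meets
    its minimum, fall back to the best remaining candidate.
    """
    remaining = list(candidates)
    counts = Counter()
    result = []
    for pos in range(1, k + 1):
        # groups whose minimum count at this prefix length is not yet met
        needy = {a for a, p in desired_dist.items()
                 if counts[a] < math.floor(p * pos)}
        pick = 0  # default: best remaining candidate by score
        if needy:
            for i, (_, attr) in enumerate(remaining):
                if attr in needy:
                    pick = i
                    break
        score, attr = remaining.pop(pick)
        counts[attr] += 1
        result.append((score, attr))
    return result
```

For example, with candidates `[(1.0, 'm'), (0.9, 'm'), (0.8, 'm'), (0.7, 'f'), (0.6, 'f')]` and a desired distribution of 50/50, the score-only top-4 is skewed towards `'m'` (positive skew), while the greedy re-ranking alternates groups to keep every prefix close to the target.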

Related research

- Towards Fairness in Personalized Ads Using Impression Variance Aware Reinforcement Learning (06/05/2023)
- Fairness-Aware Online Personalization (07/30/2020)
- Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides (06/08/2020)
- Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning (05/14/2020)
- RAGUEL: Recourse-Aware Group Unfairness Elimination (08/30/2022)
- Auditing Yelp's Business Ranking and Review Recommendation Through the Lens of Fairness (08/04/2023)
- Detection of Groups with Biased Representation in Ranking (12/30/2022)
