Choosing an algorithmic fairness metric for an online marketplace: Detecting and quantifying algorithmic bias on LinkedIn

02/15/2022
by YinYin Yu, et al.

In this paper, we derive an algorithmic fairness metric for the recommendation algorithms that power LinkedIn from the fairness notion of equal opportunity for equally qualified candidates. We borrow from the economic literature on discrimination to arrive at a test for detecting algorithmic discrimination, which we then use to audit two algorithms from LinkedIn with respect to gender bias. Moreover, we introduce a framework for distinguishing algorithmic bias from human bias, both of which can potentially exist on a two-sided platform.
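The fairness notion underlying the metric, equal opportunity for equally qualified candidates, says that conditional on being qualified, the rate at which the algorithm surfaces a candidate should not depend on group membership (e.g., gender). As a rough illustration of that notion only, and not the authors' actual test, which draws on the economics literature on discrimination, the sketch below compares recommendation rates across groups among qualified candidates; the function name and the synthetic data are hypothetical.

import numpy as np

def equal_opportunity_gap(qualified, recommended, group):
    """Gap in recommendation rates across groups, restricted to
    qualified candidates (illustrative sketch, not the paper's test).

    qualified   : bool array, True if the candidate meets the qualification bar
    recommended : bool array, True if the algorithm surfaced the candidate
    group       : array of group labels, e.g. "A" / "B"
    """
    rates = {}
    for g in np.unique(group):
        mask = qualified & (group == g)
        # Estimate P(recommended | qualified, group = g)
        rates[g] = recommended[mask].mean()
    return rates, max(rates.values()) - min(rates.values())

# Synthetic example (hypothetical data): group "A" is surfaced slightly
# more often than "B" among equally qualified candidates, so the gap is nonzero.
rng = np.random.default_rng(0)
n = 100_000
group = rng.choice(["A", "B"], size=n)
qualified = rng.random(n) < 0.4
recommended = rng.random(n) < np.where(group == "A", 0.55, 0.50)

rates, gap = equal_opportunity_gap(qualified, recommended, group)
print(rates, gap)  # roughly {'A': 0.55, 'B': 0.50} and a gap near 0.05

A gap near zero is consistent with equal opportunity under this simplified view; the paper itself develops a statistical test for when an observed gap constitutes algorithmic discrimination, and a framework for separating that from bias originating with the platform's human users.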

Related research

07/10/2020 - Algorithmic Fairness in Education
Data-driven predictive models are increasingly used in education to supp...

09/20/2022 - Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation
The goal of this work is to help mitigate the already existing gender wa...

02/23/2018 - An Algorithmic Framework to Control Bias in Bandit-based Personalization
Personalization is pervasive in the online space as it leads to higher e...

12/05/2022 - Certifying Fairness of Probabilistic Circuits
With the increased use of machine learning systems for decision making, ...

04/12/2021 - Towards Algorithmic Transparency: A Diversity Perspective
As the role of algorithmic systems and processes increases in society, s...

10/08/2021 - Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks
Algorithmic bias is of increasing concern, both to the research communit...

06/01/2022 - Sex and Gender in the Computer Graphics Research Literature
We survey the treatment of sex and gender in the Computer Graphics resea...
