Fairness-Aware Online Personalization

07/30/2020
by G. Roshan Lal, et al.

Decision-making in consequential applications such as lending, hiring, and college admissions has seen increasing use of algorithmic models and techniques, driven by a confluence of factors: ubiquitous connectivity, the ability to collect, aggregate, and process large amounts of fine-grained data using cloud computing, and easy access to sophisticated machine learning models. Such applications are often powered by search and recommendation systems, which in turn rely on personalized ranking algorithms. At the same time, there is growing awareness of the ethical and legal challenges posed by such data-driven systems. Researchers and practitioners from different disciplines have highlighted the potential for these systems to discriminate against certain population groups, owing to biases in the datasets used to train their underlying recommendation models. We present a study of fairness in online personalization settings involving the ranking of individuals. Starting from a fair warm-start machine-learned model, we first demonstrate that online personalization can cause the model to learn to act unfairly if the user responds in a biased manner. To this end, we construct a stylized model for generating training data with potentially biased features as well as potentially biased labels, and we quantify the extent of bias learned by the model when the user's responses are biased, as in many real-world scenarios. We then formulate the problem of learning personalized models under fairness constraints and present a regularization-based approach for mitigating such biases. We demonstrate the efficacy of our approach through extensive simulations with different parameter settings.
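To make the setup concrete, below is a minimal, hypothetical sketch of the kind of pipeline the abstract describes: a warm-start ranking model is updated online from the clicks of a user who systematically discounts one group, with a fairness penalty added to the update. The data model, the logistic click model, the demographic-parity-style penalty, and all names (make_candidates, biased_user_click, online_update, lam) are illustrative assumptions, not the paper's actual formulation.

    import numpy as np

    # Illustrative sketch (not the paper's actual model): online personalization
    # where a biased user's clicks can degrade a fair warm-start model, and a
    # demographic-parity-style regularizer counteracts the drift.

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def make_candidates(n, d, g_effect=1.0):
        # Feature 0 is correlated with group membership, so user bias against a
        # group can leak into the model through the features.
        g = rng.integers(0, 2, size=n)
        X = rng.normal(size=(n, d))
        X[:, 0] += g_effect * g
        return X, g

    def biased_user_click(score, group, user_bias=1.5):
        # Biased user: systematically discounts candidates from group 1.
        return float(rng.random() < sigmoid(score - user_bias * group))

    def online_update(w, X, g, clicks, lr=0.05, lam=0.5):
        # One SGD step on: logistic loss + lam * (mean-score gap between groups)^2.
        p = sigmoid(X @ w)
        grad = X.T @ (p - clicks) / len(clicks)
        if (g == 0).any() and (g == 1).any():
            gap = X[g == 0].mean(axis=0) - X[g == 1].mean(axis=0)
            grad += 2.0 * lam * (w @ gap) * gap
        return w - lr * grad

    d = 5
    w = rng.normal(scale=0.1, size=d)           # "fair" warm-start model
    for _ in range(2000):                       # online personalization rounds
        X, g = make_candidates(10, d)
        scores = X @ w
        clicks = np.array([biased_user_click(s, gi) for s, gi in zip(scores, g)])
        w = online_update(w, X, g, clicks)

    # Fairness diagnostic: gap in average model score between the two groups.
    X_eval, g_eval = make_candidates(5000, d)
    s_eval = X_eval @ w
    print("group score gap:", s_eval[g_eval == 0].mean() - s_eval[g_eval == 1].mean())

Running the loop with lam = 0 lets the biased clicks pull the weight on the group-correlated feature down, widening the score gap between groups; a positive lam shrinks that gap. This is the kind of effect the paper's simulations are designed to quantify, though the exact regularizer and data-generation model used there are not specified in the abstract.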

Related research

04/30/2019
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
Recently, policymakers, regulators, and advocates have raised awareness ...

07/13/2022
Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions
In recent years, machine learning algorithms have become ubiquitous in a...

04/01/2022
FairRank: Fairness-aware Single-tower Ranking Framework for News Recommendation
Single-tower models are widely used in the ranking stage of news recomme...

11/17/2017
Predict Responsibly: Increasing Fairness by Learning To Defer
Machine learning systems, which are often used for high-stakes decisions...

08/14/2020
LiFT: A Scalable Framework for Measuring Fairness in ML Applications
Many internet applications are powered by machine learned models, which ...

03/20/2023
Fairness-Aware Graph Filter Design
Graphs are mathematical tools that can be used to represent complex real...

06/27/2022
Prisoners of Their Own Devices: How Models Induce Data Bias in Performative Prediction
The unparalleled ability of machine learning algorithms to learn pattern...
