New Fairness Metrics for Recommendation that Embrace Differences

06/29/2017
by Sirui Yao, et al.

We study fairness in collaborative-filtering recommender systems, which are sensitive to discrimination that exists in historical data. Biased data can lead collaborative filtering methods to make unfair predictions against minority groups of users. We identify the insufficiency of existing fairness metrics and propose four new metrics that address different forms of unfairness. These fairness metrics can be optimized by adding fairness terms to the learning objective. Experiments on synthetic and real data show that our new metrics can better measure fairness than the baseline, and that the fairness objectives effectively help reduce unfairness.
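
The abstract leaves the mechanics implicit, so here is a minimal NumPy sketch of the kind of penalized objective it describes: a matrix-factorization recommender whose training loss adds one of the proposed unfairness measures (value unfairness, per the published paper) as a penalty term. The synthetic data, group labels, hyperparameters, and plain gradient loop are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 40, 30, 5

# Synthetic ratings with missing entries; `group` marks a hypothetical
# protected user attribute (the "minority group" of the abstract).
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
mask = rng.random((n_users, n_items)) < 0.3   # observed entries
group = rng.random(n_users) < 0.25            # True = protected group

def value_unfairness(pred):
    """Mean over items of |E_g[pred - r] - E_~g[pred - r]| on observed cells."""
    err = (pred - R) * mask
    cnt_g = np.maximum(mask[group].sum(0), 1)
    cnt_o = np.maximum(mask[~group].sum(0), 1)
    return np.abs(err[group].sum(0) / cnt_g - err[~group].sum(0) / cnt_o).mean()

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))
lam, eta = 0.1, 0.5   # fairness weight and step size (illustrative)

for step in range(500):
    pred = U @ V.T
    err = (pred - R) * mask
    # Gradient of the mean squared reconstruction error w.r.t. predictions.
    G = 2.0 * err / mask.sum()
    # Subgradient of value unfairness w.r.t. predictions, added with weight lam.
    cnt_g = np.maximum(mask[group].sum(0), 1)
    cnt_o = np.maximum(mask[~group].sum(0), 1)
    d = err[group].sum(0) / cnt_g - err[~group].sum(0) / cnt_o
    s = np.sign(d) / n_items
    G[group] += lam * s * mask[group] / cnt_g
    G[~group] -= lam * s * mask[~group] / cnt_o
    gU, gV = G @ V, G.T @ U   # chain rule through pred = U @ V.T
    U -= eta * gU
    V -= eta * gV

print("value unfairness after training:", value_unfairness(U @ V.T))
```

The paper optimizes smoothed versions of its metrics; the sign-based subgradient above is the simplest stand-in for that smoothing and keeps the sketch short.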
