RecSys Fairness Metrics: Many to Use But Which One To Choose?

09/08/2022
by Jessie J. Smith, et al.

In recent years, recommendation and ranking systems have become increasingly popular on digital platforms. However, previous work has highlighted how personalized systems can lead to unintended harms for users. Practitioners require metrics to measure and mitigate these harms in production systems. To meet this need, many fairness definitions have been introduced and explored by the RecSys community. Unfortunately, this has led to a proliferation of possible fairness metrics from which practitioners must choose. The growing volume and complexity of these metrics requires practitioners to deeply understand the nuances of fairness definitions and their implementations. Practitioners also need to understand the ethical guidelines that accompany these metrics in order to implement them responsibly. Recent work has shown that ethics guidelines themselves have proliferated and has pointed to the need for implementation guidance rather than principles alone. The wide variety of available metrics, coupled with the lack of accepted standards or shared knowledge in practice, creates a challenging environment for practitioners to navigate. In this position paper, we focus on this widening gap between the research community and practitioners: the availability of metrics versus the ability to put them into practice. We address this gap with our current work, which develops methods to help ML practitioners make decisions when selecting fairness metrics for recommendation and ranking systems. In our iterative design interviews, we have already found that practitioners need both practical and reflective guidance when refining fairness constraints. This need is especially salient given the growing challenge practitioners face in choosing the right metrics while balancing complex fairness contexts.
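To make the metric-choice problem concrete, the sketch below (an illustration, not material from the paper) computes two common group fairness readings of the same ranked list: the position-discounted exposure share of a protected group and its share of top-k slots. The group labels, the DCG-style discount, and the cutoff k are assumptions chosen for the example; the point is simply that two reasonable metrics can summarize the same ranking differently, which is exactly the choice practitioners face.

```python
# Illustrative sketch only: two group fairness readings of one ranked list.
# Group labels, the DCG-style discount, and k are assumptions for the example.
import math

def exposure_share(groups, protected="B"):
    """Fraction of position-discounted exposure received by the protected group."""
    total, protected_exp = 0.0, 0.0
    for rank, group in enumerate(groups, start=1):
        exposure = 1.0 / math.log2(rank + 1)  # DCG-style position discount
        total += exposure
        if group == protected:
            protected_exp += exposure
    return protected_exp / total

def top_k_share(groups, protected="B", k=3):
    """Fraction of the top-k slots held by the protected group."""
    return sum(1 for group in groups[:k] if group == protected) / k

ranking = ["A", "A", "B", "A", "B", "B"]  # group label of the item at each rank
print(f"exposure share of B: {exposure_share(ranking):.2f}")  # ~0.38
print(f"top-3 share of B:    {top_k_share(ranking):.2f}")     # ~0.33
```

Even in this toy example the two numbers differ, and with other rankings they can move in opposite directions; neither is "the" fairness metric until a fairness context has been chosen.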


