LiFT: A Scalable Framework for Measuring Fairness in ML Applications

08/14/2020
by Sriram Vasudevan, et al.

Many internet applications are powered by machine-learned models, which are usually trained on labeled datasets obtained through implicit or explicit user feedback signals or human judgments. Since societal biases may be present in the generation of such datasets, the trained models may themselves be biased, potentially resulting in discrimination and harm to disadvantaged groups. Motivated by the need to understand and address algorithmic bias in web-scale ML systems, and by the limitations of existing fairness toolkits, we present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems. We highlight the key requirements in deployed settings and present the design of our fairness measurement system. We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn. Finally, we present open problems grounded in our practical experience.
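To make "scalable computation of fairness metrics" concrete, here is a minimal PySpark sketch of one representative metric such a system computes: the demographic parity gap, i.e., the difference in positive-prediction rates across protected groups. This is an illustrative sketch only; the column names, toy dataset, and structure are assumptions for exposition and do not reflect LiFT's actual API.

```python
# Illustrative sketch (not LiFT's API): computing a demographic parity gap
# with a distributed Spark aggregation, so the same code applies to
# web-scale scored datasets.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("fairness-metric-sketch").getOrCreate()

# Toy scored dataset: (protected-group membership, binary model prediction).
df = spark.createDataFrame(
    [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)],
    ["group", "prediction"],
)

# Per-group positive-prediction rate via a distributed groupBy aggregation.
rates = (
    df.groupBy("group")
      .agg(F.avg("prediction").alias("positive_rate"))
      .collect()
)

rate_by_group = {row["group"]: row["positive_rate"] for row in rates}

# Demographic parity gap: spread between the highest and lowest group rates.
parity_gap = max(rate_by_group.values()) - min(rate_by_group.values())
print(f"positive rates: {rate_by_group}; demographic parity gap: {parity_gap:.3f}")
```

Because the aggregation is expressed as a Spark job rather than an in-memory computation, metrics like this can be evaluated over full production datasets instead of samples, which is the kind of deployed-setting requirement the paper emphasizes.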


Related research

BeFair: Addressing Fairness in the Banking Sector (02/03/2021)
Algorithmic bias mitigation has been one of the most difficult conundrum...

Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics (06/28/2021)
Measuring bias is key for better understanding and addressing unfairness...

Visual Identification of Problematic Bias in Large Label Spaces (01/17/2022)
While the need for well-trained, fair ML systems is increasing ever more...

Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud (09/07/2021)
Understanding the predictions made by machine learning (ML) models and t...

Finding Label and Model Errors in Perception Data With Learned Observation Assertions (01/15/2022)
ML is being deployed in complex, real-world scenarios where errors have ...

A Sandbox Tool to Bias(Stress)-Test Fairness Algorithms (04/21/2022)
Motivated by the growing importance of reducing unfairness in ML predict...

Fairness-Aware Online Personalization (07/30/2020)
Decision making in crucial applications such as lending, hiring, and col...
