A Sandbox Tool to Bias(Stress)-Test Fairness Algorithms

04/21/2022
by Nil-Jana Akpinar et al.

Motivated by the growing importance of reducing unfairness in ML predictions, Fair-ML researchers have presented an extensive suite of algorithmic "fairness-enhancing" remedies. Most existing algorithms, however, are agnostic to the sources of the observed unfairness. As a result, the literature currently lacks guiding frameworks that specify the conditions under which each algorithmic intervention can potentially alleviate the underlying cause of unfairness. To close this gap, we scrutinize the underlying biases (e.g., in the training data or design choices) that cause observational unfairness. We present a bias-injection sandbox tool for investigating the fairness consequences of various biases and assessing the effectiveness of algorithmic remedies in the presence of specific types of bias. We call this process the bias(stress)-testing of algorithmic interventions. Unlike existing toolkits, ours provides a controlled environment in which biases can be injected counterfactually into the ML pipeline. This stylized setup offers the distinct capability of testing fairness interventions beyond observational data and against an unbiased benchmark. In particular, we can test whether a given remedy alleviates the injected bias by comparing the predictions made after the intervention in the biased setting with the true labels in the unbiased regime, that is, before any bias injection. We illustrate the utility of our toolkit via a proof-of-concept case study on synthetic data. Our empirical analysis showcases the type of insights that can be obtained through such simulations.
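The evaluation principle described above can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration, not the authors' sandbox tool: it generates unbiased synthetic data, injects a simple label bias against one group, trains a model with and without a stand-in remedy (a naive reweighing scheme), and scores both against the unbiased ground-truth labels rather than the biased observed labels. The bias type, the choice of logistic regression, and all variable names are assumptions made for illustration only.

# Minimal sketch of the bias(stress)-testing idea (assumed setup, not the authors' tool).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# 1) Unbiased synthetic data: one feature, one binary protected attribute.
n = 20_000
group = rng.integers(0, 2, size=n)                              # protected attribute (0/1)
x = rng.normal(size=n)
y_true = (x + rng.normal(scale=0.5, size=n) > 0).astype(int)    # unbiased ground-truth labels

# 2) Bias injection: flip a fraction of positive labels for group 1
#    (a simple form of label bias; other bias types could be injected instead).
flip_rate = 0.3
flip = (group == 1) & (y_true == 1) & (rng.random(n) < flip_rate)
y_observed = np.where(flip, 0, y_true)

X = np.column_stack([x, group])

# 3a) Baseline: train on the biased labels with no intervention.
baseline = LogisticRegression().fit(X, y_observed)

# 3b) Stand-in remedy: reweigh examples so each (group, observed label) cell
#     carries equal total weight; a placeholder for a fairness intervention.
cell = group * 2 + y_observed
weights = np.zeros(n)
for c in np.unique(cell):
    weights[cell == c] = n / (4 * np.sum(cell == c))
remedy = LogisticRegression().fit(X, y_observed, sample_weight=weights)

# 4) Stress test: compare predictions against the UNBIASED labels, per group.
for name, model in [("baseline", baseline), ("reweighed", remedy)]:
    pred = model.predict(X)
    for g in (0, 1):
        acc = accuracy_score(y_true[group == g], pred[group == g])
        print(f"{name:9s} | group {g} | accuracy vs. unbiased labels: {acc:.3f}")

The key design choice mirrored here is the benchmark: because the data-generating process is controlled, the remedy is judged against the labels as they were before bias injection, which is exactly the comparison that purely observational fairness toolkits cannot make.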


