ABCinML: Anticipatory Bias Correction in Machine Learning Applications

06/14/2022
by Abdulaziz A. Almuzaini, et al.

The idealization of a static machine-learned model, trained once and deployed forever, is not practical. As input distributions shift over time, the model will not only lose accuracy; any constraints imposed to reduce bias against a protected class may also fail to work as intended. Researchers have therefore begun to explore ways of maintaining algorithmic fairness over time. One line of work focuses on dynamic learning, retraining after each batch; the other on robust learning, which tries to make algorithms robust against all possible future changes. Dynamic learning seeks to reduce biases soon after they have occurred, while robust learning often yields (overly) conservative models. We propose an anticipatory dynamic learning approach that corrects the algorithm to mitigate bias before it occurs. Specifically, we use anticipations of the relative distributions of population subgroups (e.g., the relative ratios of male and female applicants) in the next cycle to identify the right parameters for an importance-weighting fairness approach. Results from experiments on multiple real-world datasets suggest that this approach holds promise for anticipatory bias correction.


