Fairkit, Fairkit, on the Wall, Who's the Fairest of Them All? Supporting Data Scientists in Training Fair Models

12/17/2020
by Brittany Johnson, et al.

Modern software relies heavily on data and machine learning, and affects decisions that shape our world. Unfortunately, recent studies have shown that because of biases in data, software systems frequently inject bias into their decisions, from producing better closed-caption transcriptions of men's voices than of women's voices to overcharging people of color for financial loans. To address bias in machine learning, data scientists need tools that help them understand the trade-offs between model quality and fairness in their specific data domains. Toward that end, we present fairkit-learn, a toolkit for helping data scientists reason about and understand fairness. Fairkit-learn works with state-of-the-art machine learning tools and uses the same interfaces to ease adoption. It can evaluate thousands of models produced by multiple machine learning algorithms, hyperparameters, and data permutations, and compute and visualize a small Pareto-optimal set of models that describes the optimal trade-offs between fairness and quality. We evaluate fairkit-learn via a user study with 54 students, showing that students using fairkit-learn produce models that provide a better balance between fairness and quality than students using the scikit-learn and IBM AI Fairness 360 toolkits. With fairkit-learn, users can select models that are up to 67% more fair than the models they are likely to train with scikit-learn.
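The abstract describes fairkit-learn as searching over many candidate models and keeping only the Pareto-optimal fairness/quality trade-offs. The sketch below illustrates that idea using plain scikit-learn rather than the fairkit-learn API itself: it trains a small grid of classifiers, scores each on accuracy and a demographic-parity gap, and filters to the non-dominated set. The demographic_parity_diff metric, the candidate grid, and the X/y/group inputs are illustrative assumptions, not part of the toolkit.

```python
# Minimal sketch of the model-space search that fairkit-learn automates:
# train several scikit-learn classifiers across hyperparameter settings,
# score each on accuracy and a simple group-fairness metric, and keep only
# the Pareto-optimal trade-offs. This is an illustration, not the toolkit's API.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split


def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())


def pareto_front(points):
    """Indices of (accuracy, unfairness) pairs not dominated by any other pair
    (higher accuracy and lower unfairness are both better)."""
    front = []
    for i, (acc_i, unf_i) in enumerate(points):
        dominated = any(
            acc_j >= acc_i and unf_j <= unf_i and (acc_j > acc_i or unf_j < unf_i)
            for j, (acc_j, unf_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front


def search(X, y, group):
    # X, y, and the binary protected attribute `group` are assumed to be
    # NumPy arrays prepared elsewhere (e.g., a loaded census dataset).
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=0
    )
    candidates = (
        [LogisticRegression(C=c, max_iter=1000) for c in (0.01, 0.1, 1.0, 10.0)]
        + [DecisionTreeClassifier(max_depth=d) for d in (2, 4, 8, None)]
    )
    scores = []
    for model in candidates:
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        scores.append((model.score(X_te, y_te),
                       demographic_parity_diff(pred, g_te)))
    # Return only the models on the fairness/accuracy Pareto front.
    return [(candidates[i], scores[i]) for i in pareto_front(scores)]
```

In practice, fairkit-learn automates this kind of search over many more algorithms, hyperparameters, data permutations, and fairness metrics, and visualizes the resulting Pareto front so users can compare the trade-offs directly.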



research 12/05/2022
Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning Fairness?
As machine learning (ML) systems get adopted in more critical areas, it ...

research 02/07/2023
To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods
The right to be forgotten (RTBF) is motivated by the desire of people no...

research 08/07/2023
My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
Machine learning technology has become ubiquitous, but, unfortunately, o...

research 09/14/2022
Accuracy, Fairness, and Interpretability of Machine Learning Criminal Recidivism Models
Criminal recidivism models are tools that have gained widespread adoptio...

research 09/10/2020
Prune Responsibly
Irrespective of the specific definition of fairness in a machine learnin...

research 10/22/2020
Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation
Recently, there have been increasing calls for computer science curricul...

research 10/11/2022
Navigating Ensemble Configurations for Algorithmic Fairness
Bias mitigators can improve algorithmic fairness in machine learning mod...
