Fairkit, Fairkit, on the Wall, Who's the Fairest of Them All? Supporting Data Scientists in Training Fair Models

by Brittany Johnson, et al.

Modern software relies heavily on data and machine learning, and affects decisions that shape our world. Unfortunately, recent studies have shown that, because of biases in data, software systems frequently inject bias into their decisions, from producing better closed-caption transcriptions of men's voices than of women's voices to overcharging people of color for financial loans. To address bias in machine learning, data scientists need tools that help them understand the trade-offs between model quality and fairness in their specific data domains. Toward that end, we present fairkit-learn, a toolkit for helping data scientists reason about and understand fairness. Fairkit-learn works with state-of-the-art machine learning tools and uses the same interfaces to ease adoption. It can evaluate thousands of models produced by multiple machine learning algorithms, hyperparameters, and data permutations, and compute and visualize a small Pareto-optimal set of models that describes the optimal trade-offs between fairness and quality. We evaluate fairkit-learn via a user study with 54 students, showing that students using fairkit-learn produce models that provide a better balance between fairness and quality than students using the scikit-learn and IBM AI Fairness 360 toolkits. With fairkit-learn, users can select models that are up to 67% more fair than the models they are likely to train with scikit-learn.
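The Pareto-optimal model set described in the abstract can be illustrated with a short sketch. This is not fairkit-learn's actual API; the function and model names below are hypothetical. Each candidate model is scored on quality (accuracy) and fairness, both higher-is-better, and a model is kept only if no other model is at least as good on both axes and strictly better on one:

```python
# Hypothetical sketch of Pareto-front selection over (accuracy, fairness)
# scores; names and values below are illustrative, not from the paper.

def pareto_front(models):
    """Return the models not dominated by any other model, where model B
    dominates model A if B is >= A on both axes and > A on at least one."""
    front = []
    for name, acc, fair in models:
        dominated = any(
            a >= acc and f >= fair and (a > acc or f > fair)
            for n, a, f in models
            if n != name
        )
        if not dominated:
            front.append((name, acc, fair))
    return front

# Example candidates: (model name, accuracy, fairness score)
candidates = [
    ("logreg",         0.85, 0.60),
    ("tree",           0.80, 0.75),
    ("tree_reweighed", 0.78, 0.90),
    ("svm",            0.79, 0.70),  # dominated by "tree" on both axes
]

print(pareto_front(candidates))
# → [('logreg', 0.85, 0.6), ('tree', 0.8, 0.75), ('tree_reweighed', 0.78, 0.9)]
```

The surviving set makes the trade-off explicit: moving along the front trades accuracy for fairness, which is exactly the decision the toolkit asks the data scientist to make.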




Related research:

- Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning Fairness?
- To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods
- My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
- Accuracy, Fairness, and Interpretability of Machine Learning Criminal Recidivism Models
- Prune Responsibly
- Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation
- Navigating Ensemble Configurations for Algorithmic Fairness
