FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency

09/11/2019
by Kacper Sokol, et al.

Machine learning algorithms can make important, sometimes legally binding, decisions about our everyday lives. In most cases, however, these systems and decisions are neither regulated nor certified. Given the potential harm that these algorithms can cause, qualities such as fairness, accountability and transparency of predictive systems are of paramount importance. Recent literature has suggested voluntary self-reporting on these aspects of predictive systems – e.g., data sheets for data sets – but the scope of such reports is often limited to a single component of a machine learning pipeline, and producing them requires manual labour. To resolve this impasse and ensure high-quality, fair, transparent and reliable machine learning systems, we developed an open source toolbox that can inspect selected fairness, accountability and transparency aspects of these systems and automatically and objectively report them back to their engineers and users. In this paper we describe the design, scope and usage examples of this Python toolbox. The toolbox provides functionality for inspecting fairness, accountability and transparency of all aspects of the machine learning process: data (and their features), models and predictions. It is available to the public under the BSD 3-Clause open source licence.
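To give a flavour of the kind of check such a toolbox automates, below is a minimal, self-contained sketch of one common fairness metric – the demographic parity difference between protected groups. This is an illustrative example only, not the FAT Forensics API; the function name and signature here are hypothetical.

```python
# Hypothetical sketch of a group-fairness check; NOT the FAT Forensics API.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates across protected groups.

    A value of 0 means every group receives positive predictions at the
    same rate; larger values indicate greater disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    # Positive-prediction rate for each distinct group label.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Group "a" receives positive predictions 75% of the time, group "b" 25%.
preds = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

A toolbox-level report would compute many such metrics over the data, models and predictions of a pipeline and present them together, rather than relying on engineers to hand-compute each one.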

