FairCanary: Rapid Continuous Explainable Fairness

06/13/2021
by Avijit Ghosh, et al.

Machine Learning (ML) models are used across society to make high-stakes decisions such as bail granting and credit lending, with minimal regulation. Such systems are vulnerable to both propagating and amplifying social biases, and have therefore attracted growing research interest. A central problem with conventional fairness metrics is their narrow definitions, which hide the full extent of bias by focusing primarily on positive and/or negative outcomes while ignoring the overall shape of the output distribution. Moreover, these metrics often contradict one another, are heavily constrained by the contextual and legal landscape of the problem, suffer from technical limitations such as poor support for continuous outputs and the requirement of class labels, and are not explainable. In this paper, we present Quantile Demographic Drift (QDD), which addresses these shortcomings. The metric can also measure intra-group privilege, is easily interpretable via existing attribution techniques, and extends naturally to individual fairness via the principle of like-for-like comparison. We make this fairness score the basis of a new system designed to detect bias in production ML models without the need for labels. We call the system FairCanary because, like the proverbial canary in a coal mine, it can detect bias in a live deployed model and narrow the alert down to the responsible set of features.
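To make the idea of a quantile-based, distribution-aware comparison concrete, here is a minimal, hypothetical sketch that compares the model scores of two demographic groups quantile by quantile. The function name, the choice of 100 quantiles, and the use of a simple mean absolute per-quantile gap are illustrative assumptions only; the paper's exact QDD formulation and normalization may differ.

```python
import numpy as np

def quantile_demographic_drift(scores_a, scores_b, n_quantiles=100):
    """Illustrative quantile-based drift between two groups' model scores.

    Compares the two score distributions at matching quantiles and returns
    the mean absolute per-quantile gap. This is a sketch of the idea in the
    abstract, not the paper's exact QDD definition.
    """
    qs = np.linspace(0.0, 1.0, n_quantiles + 1)[1:-1]  # interior quantile levels
    q_a = np.quantile(scores_a, qs)
    q_b = np.quantile(scores_b, qs)
    return float(np.mean(np.abs(q_a - q_b)))

# Hypothetical usage: continuous risk scores for two demographic groups.
rng = np.random.default_rng(0)
group_a = rng.normal(0.55, 0.10, size=5_000)  # slightly higher-scoring group
group_b = rng.normal(0.50, 0.12, size=5_000)
print(f"QDD-style drift: {quantile_demographic_drift(group_a, group_b):.4f}")
```

Because the comparison is made across the whole distribution, a gap at any quantile contributes to the score, not just a difference in the rate of positive or negative outcomes.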


Related research

05/10/2021  Improving Fairness of AI Systems with Lossless De-biasing
In today's society, AI systems are increasingly used to make critical de...

03/13/2021  OmniFair: A Declarative System for Model-Agnostic Group Fairness in Machine Learning
Machine learning (ML) is increasingly being used to make decisions in ou...

12/17/2019  Human Comprehension of Fairness in Machine Learning
Bias in machine learning has manifested injustice in several areas, such...

09/20/2021  Algorithmic Fairness Verification with Graphical Models
In recent years, machine learning (ML) algorithms have been deployed in ...

11/19/2018  Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions
A recent flurry of research activity has attempted to quantitatively def...

01/17/2022  Visual Identification of Problematic Bias in Large Label Spaces
While the need for well-trained, fair ML systems is increasing ever more...

02/03/2022  Measuring Disparate Outcomes of Content Recommendation Algorithms with Distributional Inequality Metrics
The harmful impacts of algorithmic decision systems have recently come i...