Demographic Parity Inspector: Fairness Audits via the Explanation Space

03/14/2023
by Carlos Mougan et al.

Even if deployed with the best intentions, machine learning methods can perpetuate, amplify, or even create social biases. Measures of (un-)fairness have been proposed as a way to gauge the (non-)discriminatory nature of machine learning models. However, proxies of protected attributes that cause discriminatory effects remain challenging to address. In this work, we propose a new algorithmic approach that measures group-wise demographic parity violations and allows us to inspect the causes of inter-group discrimination. Our method relies on the novel idea of measuring the dependence of a model on the protected attribute in the explanation space, an informative space that enables more sensitive audits than the primary space of input data or prediction distributions and lets us assert theoretical guarantees for demographic parity auditing. We provide a mathematical analysis, synthetic examples, and an experimental evaluation on real-world data. We release an open-source Python package with methods, routines, and tutorials.
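To make the idea concrete, here is a minimal sketch of such an audit on synthetic data, assuming Shapley values as the explanation space and scikit-learn plus the shap library as tooling; the "inspector" classifier and every name below are illustrative choices, not the released package's API.

# Minimal sketch of an explanation-space fairness audit (illustrative only).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
z = rng.integers(0, 2, n)             # protected attribute (never given to the model)
proxy = z + rng.normal(0, 0.5, n)     # feature leaking the protected attribute
x2 = rng.normal(0, 1, n)
X = np.column_stack([proxy, x2])
y = ((x2 + 0.5 * proxy + rng.normal(0, 1, n)) > 0).astype(int)

X_tr, X_te, y_tr, _, z_tr, z_te = train_test_split(
    X, y, z, test_size=0.5, random_state=0
)

# 1) Fit the model under audit; it never sees z directly.
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# 2) Map held-out inputs into the explanation space via Shapley values.
explainer = shap.TreeExplainer(model)
S = explainer.shap_values(X_te)       # one Shapley value per feature per row

# 3) Train an "inspector" to predict the protected attribute from the
#    explanations; its AUC measures the model's dependence on z.
S_fit, S_hold, z_fit, z_hold = train_test_split(
    S, z_te, test_size=0.5, random_state=0
)
inspector = LogisticRegression().fit(S_fit, z_fit)
auc = roc_auc_score(z_hold, inspector.predict_proba(S_hold)[:, 1])
print(f"inspector AUC: {auc:.3f}")

# The inspector's coefficients indicate which features' explanations differ
# across groups, i.e. which features act as proxies (here, `proxy`).
print("inspector coefficients:", np.round(inspector.coef_[0], 3))

An AUC near 0.5 means the explanations carry no detectable information about the protected attribute; larger values flag a demographic parity violation, and the inspector's coefficients point at the features acting as proxies.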


Related research

11/13/2020 · An example of prediction which complies with Demographic Parity and equalizes group-wise risks in the context of regression
Let (X, S, Y) ∈ ℝ^p × {1, 2} × ℝ be a triplet following some joint distribut...

05/15/2022 · Fair Bayes-Optimal Classifiers Under Predictive Parity
Increasing concerns about disparate effects of AI have motivated a great...

07/06/2023 · Through the Fairness Lens: Experimental Analysis and Evaluation of Entity Matching
Entity matching (EM) is a challenging problem studied by different commu...

03/14/2023 · Explanation Shift: Investigating Interactions between Models and Shifting Data Distributions
As input data distributions evolve, the predictive performance of machin...

06/08/2020 · Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides
Algorithmic bias is the systematic preferential or discriminatory treatm...

05/10/2022 · Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation
Research in machine learning fairness has historically considered a sing...

07/28/2023 · LUCID-GAN: Conditional Generative Models to Locate Unfairness
Most group fairness notions detect unethical biases by computing statist...
