Bayesian Modeling of Intersectional Fairness: The Variance of Bias

11/18/2018
by James Foulds et al.

Intersectionality is a framework that analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including race, gender, sexual orientation, class, and disability. Intersectionality theory therefore implies that fairness in artificial intelligence systems should be protected with regard to multi-dimensional protected attributes. However, measuring fairness becomes statistically challenging in the multi-dimensional setting because of data sparsity, which worsens rapidly as the number of dimensions, and the number of values per dimension, increases. We present a Bayesian probabilistic modeling approach for the reliable, data-efficient estimation of fairness with multi-dimensional protected attributes, which we apply to novel intersectional fairness metrics. Experimental results on census data and the COMPAS criminal justice recidivism dataset demonstrate the utility of our methodology, and show that Bayesian methods are valuable for the modeling and measurement of fairness in an intersectional context.
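To make the sparsity problem concrete, consider estimating an outcome rate for every intersectional subgroup (e.g. each race-by-gender combination): many cells contain only a handful of observations, so raw empirical rates are unreliable. A minimal sketch of the Bayesian idea, under simplifying assumptions not taken from the paper, is to smooth each subgroup's rate with a Beta prior (the posterior mean under Beta-Binomial conjugacy) and then compute an intersectional fairness score as the worst-case log-ratio of outcome probabilities across subgroup pairs, in the spirit of the authors' differential fairness measure. The function names and the uniform Beta(1, 1) prior here are illustrative choices, not the paper's exact model.

```python
import itertools

import numpy as np


def smoothed_rates(counts, positives, alpha=1.0, beta=1.0):
    """Posterior mean of P(y=1 | group) under a Beta(alpha, beta) prior.

    With sparse intersectional groups, the prior pulls tiny-sample
    estimates toward alpha / (alpha + beta), avoiding rates of exactly
    0 or 1 that would break log-ratio comparisons.
    """
    counts = np.asarray(counts, dtype=float)
    positives = np.asarray(positives, dtype=float)
    return (positives + alpha) / (counts + alpha + beta)


def worst_case_log_ratio(rates):
    """Smallest eps such that, for every pair of groups g1, g2 and both
    outcomes y in {0, 1}, |log P(y|g1) - log P(y|g2)| <= eps."""
    eps = 0.0
    for p1, p2 in itertools.combinations(rates, 2):
        eps = max(eps,
                  abs(np.log(p1) - np.log(p2)),        # outcome y = 1
                  abs(np.log(1 - p1) - np.log(1 - p2)))  # outcome y = 0
    return eps


# Hypothetical counts for four intersectional subgroups; note the
# smallest group has only 3 observations.
group_sizes = [120, 45, 3, 80]
group_positives = [60, 20, 3, 35]

rates = smoothed_rates(group_sizes, group_positives)
print(worst_case_log_ratio(rates))  # eps near 0 means near-parity
```

The smoothing matters most for the 3-person subgroup: its raw rate of 3/3 = 1.0 would yield an infinite log-ratio, while the posterior mean (3 + 1)/(3 + 2) = 0.8 keeps the metric finite and better reflects the uncertainty in so few observations.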


