Auditing for Spatial Fairness

02/23/2023
by Dimitris Sacharidis, et al.

This paper studies algorithmic fairness when the protected attribute is location. For continuous protected attributes such as age or income, the standard approach is to discretize the domain into predefined groups and compare algorithmic outcomes across groups. However, applying this idea to location raises concerns of gerrymandering and may introduce statistical bias. Prior work addresses these concerns, but only for regularly spaced locations, and raises other issues, most notably an inability to discern which regions are likely to exhibit spatial unfairness. Similar to established notions of algorithmic fairness, we define spatial fairness as the statistical independence of outcomes from location. This translates into requiring that, for each region of space, the distribution of outcomes is identical inside and outside the region. To allow for localized discrepancies in the distribution of outcomes, we compare how well two competing hypotheses explain the observed outcomes. The null hypothesis assumes spatial fairness, while the alternative allows different distributions inside and outside a region. Their goodness of fit is then assessed by a likelihood ratio test. If there is no significant difference in how well the two hypotheses explain the observed outcomes, we conclude that the algorithm is spatially fair.
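The core idea can be sketched for the simplest case: binary outcomes and a single candidate region. This is a minimal illustration of the likelihood-ratio approach, not the paper's exact procedure; the function names are hypothetical, the outcome model is assumed Bernoulli, and the chi-square(1) reference distribution for the test statistic is the standard asymptotic choice rather than anything stated in the abstract.

```python
import math

def bernoulli_loglik(successes, total):
    """Maximized Bernoulli log-likelihood of `successes` out of `total`."""
    if total == 0 or successes in (0, total):
        return 0.0  # degenerate MLE (p = 0 or 1) contributes zero
    p = successes / total
    return successes * math.log(p) + (total - successes) * math.log(1 - p)

def spatial_fairness_lrt(outcomes, in_region):
    """Likelihood ratio test for one region.

    H0 (spatial fairness): one outcome rate everywhere.
    H1: separate rates inside vs. outside the region.
    Returns (statistic, p_value), with p from a chi-square(1) tail.
    """
    n, k = len(outcomes), sum(outcomes)
    n_in = sum(in_region)
    k_in = sum(y for y, inside in zip(outcomes, in_region) if inside)

    ll_null = bernoulli_loglik(k, n)
    ll_alt = bernoulli_loglik(k_in, n_in) + bernoulli_loglik(k - k_in, n - n_in)

    stat = 2 * (ll_alt - ll_null)
    # chi-square(1) survival function: P(X > x) = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(max(stat, 0.0) / 2))
    return stat, p_value

# Hypothetical data: 70% positive outcomes inside the region, 30% outside
outcomes = [1] * 7 + [0] * 3 + [1] * 3 + [0] * 7
in_region = [True] * 10 + [False] * 10
stat, p = spatial_fairness_lrt(outcomes, in_region)
```

A small p-value indicates that the alternative hypothesis explains the outcomes significantly better, i.e., the region is a candidate for spatial unfairness; auditing a whole space would repeat this over many candidate regions, with an appropriate multiple-testing correction.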

