Black Loans Matter: Distributionally Robust Fairness for Fighting Subgroup Discrimination

11/27/2020
by   Mark Weber, et al.

Algorithmic fairness in lending today relies on group fairness metrics that monitor statistical parity across protected groups. This approach is vulnerable to subgroup discrimination by proxy, carrying significant risks of legal and reputational damage for lenders and blatantly unfair outcomes for borrowers. Practical challenges arise from the many possible combinations and subsets of protected groups. We motivate this problem against the backdrop of historical and residual racism in the United States, which pollutes all available training data and heightens public sensitivity to algorithmic bias. We review current regulatory compliance protocols for fairness in lending and discuss their limitations relative to the contributions that state-of-the-art fairness methods may afford. Drawing on recent developments in individual fairness methods and corresponding fair metric learning algorithms, we propose a solution that addresses subgroup discrimination while adhering to existing group fairness requirements.
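To give a sense of what "fair metric learning for individual fairness" means in practice, here is a minimal sketch (not the paper's method) of a fair Mahalanobis-style distance. The idea is that two loan applicants who differ only along directions correlated with protected attributes, such as a zip-code proxy, should be treated as comparable. The sensitive direction below is assumed for illustration; in real systems it would be estimated from data.

```python
import numpy as np

def fair_metric(x1, x2, sensitive_dirs):
    """Distance that ignores movement along sensitive directions.

    sensitive_dirs: (k, d) array of orthonormal vectors spanning the
    subspace correlated with protected attributes (hypothetical here).
    """
    diff = x1 - x2
    # Project the difference onto the sensitive subspace and remove it,
    # so applicants differing only by proxy features are "close".
    proj = sensitive_dirs.T @ (sensitive_dirs @ diff)
    return float(np.linalg.norm(diff - proj))

# Toy example: feature 0 is income (in $1k), feature 1 is a proxy
# correlated with a protected attribute (e.g., a zip-code encoding).
v = np.array([[0.0, 1.0]])        # assumed sensitive direction
a = np.array([50.0, 0.2])
b = np.array([50.0, 0.9])         # differs only along the proxy
c = np.array([30.0, 0.2])         # differs in income

print(fair_metric(a, b, v))  # ~0.0: treated as comparable applicants
print(fair_metric(a, c, v))  # 20.0: genuinely different credit profile
```

An individually fair lender would then be required to give near-identical decisions to applicants whose fair-metric distance is small, which is how such methods can catch proxy-based subgroup discrimination that marginal statistical-parity checks miss.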


