Learning Fair and Interpretable Representations via Linear Orthogonalization

10/28/2019
by Yuzi He, et al.

To reduce human error and prejudice, many high-stakes decisions have been turned over to machine algorithms. However, recent research suggests that this does not remove discrimination and can perpetuate harmful stereotypes. While algorithms have been developed to improve fairness, they typically face at least one of three shortcomings: they are not interpretable, they sacrifice significant accuracy relative to their unconstrained equivalents, or they are not transferable across models. To address these issues, we propose a geometric method that removes correlations between the data and any number of protected variables. Further, an adjustable parameter controls the strength of debiasing, addressing the trade-off between model accuracy and fairness. The resulting features are interpretable and can be used with many popular models, such as linear regression, random forests, and multilayer perceptrons. The resulting predictions are found to be more accurate and more fair than those of several comparable fair-AI algorithms across a variety of benchmark datasets. Our work shows that debiasing data is a simple and effective step toward improving fairness.
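The abstract describes the method only at a high level. As an illustration, the sketch below implements one standard form of linear orthogonalization consistent with that description: regress each feature on the protected variables and subtract the fitted component, with a tunable scaling that weakens the removal. The function name `orthogonalize` and its `strength` parameter are illustrative, not taken from the paper; read this as a plausible sketch of the technique, not the authors' exact algorithm.

```python
import numpy as np

def orthogonalize(X, Z, strength=1.0):
    """Remove linear correlation between features X and protected variables Z.

    X        : (n, d) feature matrix
    Z        : (n, k) matrix of protected variables (one column each)
    strength : debiasing strength in [0, 1]; 1.0 removes all linear
               correlation with Z, 0.0 leaves X (centered) unchanged.
    """
    # Center both matrices so subtracting the fitted component removes
    # correlation, not just raw inner products.
    Xc = X - X.mean(axis=0)
    Zc = Z - Z.mean(axis=0)
    # Least-squares coefficients of X on Z: B = (Zc^T Zc)^+ Zc^T Xc.
    B, *_ = np.linalg.lstsq(Zc, Xc, rcond=None)
    # Subtract the part of X that Z explains, scaled by `strength`.
    return Xc - strength * (Zc @ B)
```

A quick check on synthetic data shows the cross-correlations with the protected variables vanishing at full strength:

```python
rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 2))                       # two protected variables
X = Z @ rng.normal(size=(2, 5)) + rng.normal(size=(1000, 5))
X_fair = orthogonalize(X, Z, strength=1.0)
# Correlations between debiased features and Z are ~0 up to float error.
print(np.corrcoef(np.hstack([X_fair, Z]).T)[:5, 5:].round(6))
```

Because the transformation is a fixed linear map on the features, the debiased columns keep their original meaning (each is the original feature minus its protected-variable component), which is consistent with the interpretability and model-agnostic transferability claimed in the abstract.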


