Fair Forests: Regularized Tree Induction to Minimize Model Bias

12/21/2017
by   Edward Raff, et al.

The potential lack of fairness in the outputs of machine learning algorithms has recently gained attention both within the research community and in society more broadly. Surprisingly, no prior work has developed tree-induction algorithms for building fair decision trees or fair random forests. Tree-based methods are widely popular because they are among the few that are simultaneously interpretable, non-linear, and easy to use. In this paper we develop, to our knowledge, the first technique for the induction of fair decision trees. We show that our "Fair Forest" retains the benefits of the tree-based approach while providing both greater accuracy and fairness than other alternatives, for both "group fairness" and "individual fairness." We also introduce new measures of fairness that can handle multinomial and continuous attributes as well as regression problems, rather than only binary attributes and labels. Finally, we demonstrate a new, more robust evaluation procedure for fairness algorithms that considers the dataset in its entirety rather than only a specific protected attribute.
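The core idea of regularized tree induction can be illustrated with a split criterion that rewards information gain on the target label while penalizing information gain on the protected attribute, so that splits which separate protected groups are discouraged. The sketch below is a minimal illustration of that general idea; the function names and the exact form of the penalty are assumptions for exposition, not the paper's implementation:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy of a sequence of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def info_gain(values, mask):
    """Information gain on `values` from a binary split given by boolean `mask`."""
    left = [v, for_ in ()] if False else [v for v, m in zip(values, mask) if m]
    right = [v for v, m in zip(values, mask) if not m]
    n = len(values)
    conditional = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(values) - conditional

def fair_split_score(y, protected, mask):
    """Score a candidate split: reward gain on the label,
    penalize gain on the protected attribute (illustrative penalty form)."""
    return info_gain(y, mask) - info_gain(protected, mask)

# A split that perfectly separates the labels but not the protected
# groups scores high; one that separates protected groups scores low.
y = [0, 0, 1, 1]
protected = [0, 1, 0, 1]
good = fair_split_score(y, protected, [True, True, False, False])   # separates labels only
bad = fair_split_score(y, protected, [True, False, True, False])    # separates groups only
```

Here `good` evaluates to 1.0 and `bad` to -1.0, so a greedy induction loop choosing the highest-scoring split would prefer the label-separating, group-neutral split.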


Related research:

- 11/22/2020: Fairness-guided SMT-based Rectification of Decision Trees and Random Forests
- 11/25/2018: Intersectionality: Multiple Group Fairness in Expectation Constraints
- 09/21/2021: Fairness without Imputation: A Decision Tree Approach for Fair Prediction with Missing Values
- 10/10/2018: Equality Constrained Decision Trees: For the Algorithmic Enforcement of Group Fairness
- 10/28/2019: Learning Fair and Interpretable Representations via Linear Orthogonalization
- 11/23/2022: Subgroup Robustness Grows On Trees: An Empirical Baseline Investigation
- 07/01/2018: Gradient Reversal Against Discrimination
