Fair Tree Learning

10/18/2021
by António Pereira Barata, et al.

When dealing with sensitive data in automated data-driven decision-making, an important concern is to learn predictors with high performance towards a class label, whilst minimising discrimination with respect to some sensitive attribute, such as gender or race, induced from biased data. Various hybrid optimisation criteria exist that combine classification performance with a fairness metric. However, while the threshold-free ROC-AUC is the standard for measuring traditional classification model performance, current fair decision tree methods optimise only for a fixed threshold, on both the classification task and the fairness metric. Moreover, current tree learning frameworks do not allow for fair treatment with respect to multiple categories or multiple sensitive attributes. Lastly, the end users of a fair model should be able to balance fairness and classification performance according to their specific ethical, legal, and societal needs. In this paper, we address these shortcomings by proposing a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion, SCAFF (Splitting Criterion AUC for Fairness), for fair decision tree learning, which extends to bagged and boosted frameworks. Compared to the state of the art, our method provides three main advantages: (1) classifier performance and fairness are defined continuously instead of relying upon an often arbitrary decision threshold; (2) it leverages multiple sensitive attributes simultaneously, whose values may be multicategorical; and (3) the unavoidable performance-fairness trade-off is tunable during learning. In our experiments, we demonstrate that SCAFF attains high predictive performance towards the class label and low discrimination with respect to binary, multicategorical, and multiple sensitive attributes, further substantiating our claims.
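The abstract describes a splitting criterion that scores candidate splits by two threshold-free AUCs: one towards the class label (to be maximised) and one towards the sensitive attribute (to be pushed towards the fair value of 0.5), with a tunable trade-off. The paper's exact formulation is in the full text; the sketch below is only a plausible illustration of that idea, with `auc`, `scaff_like_score`, and the linear `theta` trade-off all being hypothetical names and choices, not the authors' definitions.

```python
def auc(labels, scores):
    """ROC-AUC via the rank-sum (Mann-Whitney U) statistic:
    the probability that a random positive outranks a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        return 0.5  # degenerate split: no ranking information
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def scaff_like_score(y, s, scores, theta=0.5):
    """Hypothetical split score: reward separability of the class label y,
    penalise separability of the sensitive attribute s.

    theta in [0, 1] tunes the performance-fairness trade-off;
    theta = 0 reduces to plain AUC towards the class label.
    """
    auc_y = auc(y, scores)
    auc_s = auc(s, scores)
    # Fold the sensitive AUC around 0.5: 0 means the split carries no
    # information about s (fair), 1 means s is fully separable (unfair).
    fairness_penalty = 2 * abs(auc_s - 0.5)
    return (1 - theta) * auc_y - theta * fairness_penalty
```

For multicategorical or multiple sensitive attributes, one natural extension of this sketch is to take the worst case (maximum penalty) over one-vs-rest encodings of each sensitive value, so that no group's separability is hidden by averaging; whether this matches the paper's construction would need to be checked against the full text.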


Related research

FAHT: An Adaptive Fairness-aware Decision Tree Classifier (07/16/2019)
Online Decision Trees with Fairness (10/15/2020)
Fairness in Supervised Learning: An Information Theoretic Approach (01/13/2018)
How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies (07/11/2022)
A Sequentially Fair Mechanism for Multiple Sensitive Attributes (09/12/2023)
DeBayes: a Bayesian method for debiasing network embeddings (02/26/2020)
Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach (09/26/2020)
