Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making

03/25/2019
by   Sina Aghaei, et al.

In recent years, automated data-driven decision-making systems have enjoyed tremendous success in a variety of fields (e.g., making product recommendations or guiding the production of entertainment). More recently, these algorithms are increasingly being used to assist socially sensitive decision-making (e.g., deciding whom to admit into a degree program or prioritizing individuals for public housing). Yet these automated tools may result in discriminative decision-making in the sense that they may treat individuals unfairly or unequally based on membership in a category or minority, resulting in disparate treatment or disparate impact and violating both moral and ethical standards. This may happen when the training dataset is itself biased (e.g., if individuals belonging to a particular group have historically been discriminated against). However, it may also happen when the training dataset is unbiased, if the errors made by the system affect individuals belonging to a category or minority differently (e.g., if misclassification rates for Blacks are higher than for Whites). In this paper, we unify the definitions of unfairness across classification and regression. We propose a versatile mixed-integer optimization framework for learning optimal and fair decision trees, and variants thereof, to prevent disparate treatment and/or disparate impact as appropriate. This translates into a flexible scheme for designing fair and interpretable policies suitable for socially sensitive decision-making. We conduct extensive computational studies showing that our framework improves on the state of the art in the field (which typically relies on heuristics), yielding non-discriminative decisions at lower cost to overall accuracy.
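The abstract distinguishes two failure modes: disparate impact (unequal positive-decision rates across groups) and unequal error rates even on unbiased data. As a minimal sketch of how these two quantities can be measured on a model's predictions — the function names and toy data below are illustrative, not taken from the paper's formulation:

```python
# Hedged sketch: two group-fairness quantities alluded to in the abstract,
# computed on toy binary predictions with a binary protected attribute.

def statistical_parity_gap(y_pred, group):
    """Disparate-impact proxy: absolute difference in positive-prediction
    rates between groups (0 means parity)."""
    rate = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rate[g] = sum(preds) / len(preds)
    vals = list(rate.values())
    return abs(vals[0] - vals[1])

def error_rate_gap(y_true, y_pred, group):
    """Gap in misclassification rates across groups: the scenario where the
    data are unbiased but errors fall unevenly on one group."""
    err = {}
    for g in set(group):
        pairs = [(t, p) for t, p, gi in zip(y_true, y_pred, group) if gi == g]
        err[g] = sum(t != p for t, p in pairs) / len(pairs)
    vals = list(err.values())
    return abs(vals[0] - vals[1])

# Toy example: groups 'A' and 'B', four individuals each.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
group  = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']

print(statistical_parity_gap(y_pred, group))          # 0.25
print(error_rate_gap(y_true, y_pred, group))          # 0.25
```

In the paper's setting, constraints bounding quantities like these would enter the mixed-integer program alongside the tree-structure and accuracy objectives, rather than being checked after the fact.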


