Learning Fair Naive Bayes Classifiers by Discovering and Eliminating Discrimination Patterns

06/10/2019
by YooJung Choi, et al.

As machine learning is increasingly used to make real-world decisions, recent research efforts aim to define and ensure fairness in algorithmic decision making. Existing methods often assume a fixed set of observable features to define individuals, but do not address what happens when some of those features are unobserved at test time. In this paper, we study the fairness of naive Bayes classifiers, which naturally handle partial observations. In particular, we introduce the notion of a discrimination pattern: an individual receiving a different classification depending on whether some sensitive attributes are observed. A model is considered fair if it exhibits no such pattern. We propose an algorithm to discover discrimination patterns in a naive Bayes classifier, and show how to learn maximum-likelihood parameters subject to these fairness constraints. Our approach iteratively discovers and eliminates discrimination patterns until a fair model is learned. An empirical evaluation on three real-world datasets demonstrates that we can remove exponentially many discrimination patterns by only adding a small fraction of them as constraints.
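To make the definition concrete, the following is a minimal sketch (not the paper's implementation) of how a discrimination pattern could be checked in a naive Bayes classifier: a joint instantiation of non-sensitive features x and sensitive attributes s forms a pattern if additionally observing s shifts the classification probability by more than some threshold. The helper names, the toy model, and the threshold value `delta` are illustrative assumptions.

```python
# Sketch: detecting a discrimination pattern in a naive Bayes classifier.
# Assumes a binary decision D and a fully factorized model P(D) * prod P(feature | D);
# unobserved features are marginalized out simply by omitting their factors.

def joint(prior_d, cond, assignment):
    """Return (P(D=1, assignment), P(D=0, assignment)) under naive Bayes.
    cond[feature][value] = (P(value | D=1), P(value | D=0))."""
    p1, p0 = prior_d, 1.0 - prior_d
    for feat, val in assignment.items():
        c1, c0 = cond[feat][val]
        p1 *= c1
        p0 *= c0
    return p1, p0

def posterior(prior_d, cond, assignment):
    """P(D=1 | assignment), with all unobserved features marginalized out."""
    p1, p0 = joint(prior_d, cond, assignment)
    return p1 / (p1 + p0)

def is_discrimination_pattern(prior_d, cond, x, s, delta=0.05):
    """(x, s) is a discrimination pattern if observing the sensitive
    attributes s changes the decision probability by more than delta."""
    with_s = posterior(prior_d, cond, {**x, **s})
    without_s = posterior(prior_d, cond, x)
    return abs(with_s - without_s) > delta

# Toy example: one sensitive attribute (gender) and one other feature (income).
prior_d = 0.3  # P(D = 1)
cond = {
    "gender": {"f": (0.4, 0.6), "m": (0.6, 0.4)},    # P(gender | D)
    "income": {"hi": (0.7, 0.2), "lo": (0.3, 0.8)},  # P(income | D)
}
# P(D=1 | income=hi) = 0.6, but P(D=1 | income=hi, gender=f) = 0.5,
# so this partial observation is flagged as a discrimination pattern.
print(is_discrimination_pattern(prior_d, cond, x={"income": "hi"}, s={"gender": "f"}))
```

The paper's algorithm searches over such partial assignments to find violating patterns and adds them as constraints during parameter learning; the brute-force check above only illustrates the fairness condition itself.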

Related research

11/12/2018 · Eliminating Latent Discrimination: Train Then Mask
How can we control for latent discrimination in predictive models? How c...

04/29/2021 · You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features
Though machine learning models are achieving great success, extensive s...

05/06/2020 · Ensuring Fairness under Prior Probability Shifts
In this paper, we study the problem of fair classification in the presen...

03/18/2019 · Multi-Differential Fairness Auditor for Black Box Classifiers
Machine learning algorithms are increasingly involved in sensitive decis...

12/10/2017 · Fairness in Machine Learning: Lessons from Political Philosophy
What does it mean for a machine learning model to be 'fair', in terms wh...

11/22/2021 · A Semi-Supervised Adaptive Discriminative Discretization Method Improving Discrimination Power of Regularized Naive Bayes
Recently, many improved naive Bayes methods have been developed with enh...

10/13/2017 · Two-stage Algorithm for Fairness-aware Machine Learning
Algorithmic decision making process now affects many aspects of our live...