
Fairness in Supervised Learning: An Information Theoretic Approach

01/13/2018
by AmirEmad Ghassami et al.
University of Illinois at Urbana-Champaign

Automated decision-making systems are increasingly used in real-world applications. In most such systems, the decision rules are derived by minimizing the training error on available historical data. Hence, if the data contain a bias related to a sensitive attribute such as gender, race, or religion, say due to cultural or historical discriminatory practices against a certain demographic, the system may perpetuate that discrimination by absorbing the bias into its decision rule. We present an information-theoretic framework for designing fair predictors from data, aimed at preventing discrimination against a specified sensitive attribute in a supervised learning setting. We adopt equalized odds as the criterion for discrimination, which requires that the prediction be independent of the protected attribute conditioned on the true label. To ensure fairness and generalization simultaneously, we compress the data into an auxiliary variable, which is then used for the prediction task. This auxiliary variable is chosen so that it is decontaminated from the discriminatory attribute in the sense of equalized odds. The final predictor is obtained by applying a Bayesian decision rule to the auxiliary variable.
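To make the equalized-odds criterion concrete, here is a minimal sketch (not the authors' implementation) of how one might audit a trained binary predictor for it: equalized odds requires the prediction Y_hat to be independent of the sensitive attribute A conditioned on the true label Y, which for binary outcomes reduces to equal true-positive and false-positive rates across groups. The function name `equalized_odds_gaps` and the synthetic biased predictor are illustrative assumptions, not part of the paper.

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, sensitive):
    """For each true label y, return max_a P(Y_hat=1 | A=a, Y=y)
    minus min_a P(Y_hat=1 | A=a, Y=y); both gaps are 0 under
    exact equalized odds."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    gaps = {}
    for y in np.unique(y_true):
        rates = []
        for a in np.unique(sensitive):
            mask = (y_true == y) & (sensitive == a)
            if mask.any():
                # Empirical conditional acceptance rate for group a at label y.
                rates.append(y_pred[mask].mean())
        gaps[int(y)] = max(rates) - min(rates)
    return gaps

# Illustrative synthetic data: a predictor biased in favor of group a=1
# (it over-accepts that group regardless of the true label).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10_000)
a = rng.integers(0, 2, size=10_000)
y_hat = np.where(a == 1,
                 y | (rng.random(10_000) < 0.2),   # inflated FPR for a=1
                 y & (rng.random(10_000) > 0.2))   # deflated TPR for a=0
print(equalized_odds_gaps(y, y_hat, a))  # large per-label gaps expose the bias
```

In this sketch the gap for each label is roughly 0.2, flagging an equalized-odds violation; a predictor built on the paper's decontaminated auxiliary variable would, by construction, drive both gaps toward zero.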

