Fairness in Supervised Learning: An Information Theoretic Approach

01/13/2018
by   AmirEmad Ghassami, et al.

Automated decision-making systems are increasingly used in real-world applications. In most of these systems, the decision rule is derived by minimizing the training error on the available historical data. Therefore, if the data contain a bias related to a sensitive attribute such as gender, race, or religion, say, due to cultural or historical discriminatory practices against a certain demographic, the system can perpetuate that discrimination by incorporating the bias into its decision rule. We present an information-theoretic framework for designing fair predictors from data, which aims to prevent discrimination against a specified sensitive attribute in a supervised learning setting. We use equalized odds as the criterion for discrimination, which demands that the prediction be independent of the protected attribute conditioned on the actual label. To ensure fairness and generalization simultaneously, we compress the data to an auxiliary variable, which is used for the prediction task. This auxiliary variable is chosen such that it is decontaminated from the discriminatory attribute in the sense of equalized odds. The final predictor is obtained by applying a Bayesian decision rule to the auxiliary variable.
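The equalized-odds criterion described above can be made concrete with a small check: for each true label, the rate of positive predictions should be equal across protected groups (equal true-positive and false-positive rates). The sketch below, with illustrative variable names (`y_true`, `y_pred`, `group`) not taken from the paper, computes the per-label rate gaps between two groups; both gaps are zero exactly when equalized odds holds for a binary predictor.

```python
# Sketch: measuring violation of equalized odds on toy binary data.
# Equalized odds requires P(Yhat = 1 | Y = y, A = a) to be the same for
# all groups a, at each true label y (equal TPR when y = 1, equal FPR
# when y = 0). Variable names here are illustrative assumptions.
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Return {label: |rate(group 0) - rate(group 1)|} for labels 0 and 1."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = {}
    for y in (0, 1):            # condition on the actual label
        rates = []
        for a in (0, 1):        # each value of the protected attribute
            mask = (y_true == y) & (group == a)
            rates.append(y_pred[mask].mean())
        gaps[y] = abs(rates[0] - rates[1])
    return gaps                 # gaps[1] is the TPR gap, gaps[0] the FPR gap

# Toy example: the predictor favors group 1 at both labels.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(equalized_odds_gaps(y_true, y_pred, group))  # {0: 0.5, 1: 0.5}
```

A decontaminated auxiliary variable, in the paper's sense, is one for which the predictor built on it drives both of these gaps to zero.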


Related research

03/27/2019: Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality
10/18/2021: Fair Tree Learning
01/16/2018: On the Direction of Discrimination: An Information-Theoretic Analysis of Disparate Impact in Machine Learning
12/17/2019: Supervised learning algorithms resilient to discriminatory data perturbations
11/12/2022: RISE: Robust Individualized Decision Learning with Sensitive Variables
10/16/2017: Fair Kernel Learning
06/13/2023: Learning under Selective Labels with Data from Heterogeneous Decision-makers: An Instrumental Variable Approach
