Marrying Fairness and Explainability in Supervised Learning

04/06/2022
by Przemyslaw Grabowicz, et al.

Machine learning algorithms that aid human decision-making may inadvertently discriminate against certain protected groups. We formalize direct discrimination as a direct causal effect of the protected attributes on the decisions, and induced discrimination as a change in the causal influence of non-protected features associated with the protected attributes. Measurements of the marginal direct effect (MDE) and SHapley Additive exPlanations (SHAP) reveal that state-of-the-art fair learning methods can induce discrimination via association or reverse discrimination in synthetic and real-world datasets. To inhibit discrimination in algorithmic systems, we propose to nullify the influence of the protected attribute on the output of the system while preserving the influence of the remaining features. We introduce and study post-processing methods that achieve these objectives, finding that they yield relatively high model accuracy, prevent direct discrimination, and diminish various disparity measures, e.g., demographic disparity.
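As a rough illustration of the idea of nullifying a protected attribute's influence while preserving that of the other features, the sketch below uses the additive decomposition provided by the shap library: it estimates the protected attribute's per-instance contribution to a model's raw score and subtracts it post hoc. This is only a minimal sketch under assumed data, model, and a hypothetical `protected_idx`; it is not the paper's actual post-processing method.

```python
# Illustrative sketch only, not the paper's post-processing procedure.
# Assumptions: synthetic tabular data, a gradient-boosting classifier,
# and column 0 as the protected attribute.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                      # feature 0 plays the role of the protected attribute
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=2000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# SHAP additively decomposes each raw score (log-odds) into per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)         # shape: (n_samples, n_features)

protected_idx = 0                                    # hypothetical index of the protected attribute
raw_score = model.decision_function(X_test)          # raw log-odds of the positive class

# Post-hoc adjustment: remove the protected attribute's additive contribution,
# leaving the contributions attributed to the remaining features untouched.
adjusted_score = raw_score - shap_values[:, protected_idx]
adjusted_pred = (adjusted_score > 0).astype(int)
```

In this additive view, zeroing the protected attribute's SHAP contribution leaves each remaining feature's attributed contribution unchanged, which mirrors the stated objective at the level of the explanation, though only as a simplified approximation.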

