Measuring Unfairness through Game-Theoretic Interpretability

10/12/2019
by Juliana Cesaro, et al.

The literature often draws connections between measures of fairness and the measures of feature importance used to interpret trained classifiers. However, there seems to be no study that directly compares fairness measures with feature importance measures. In this paper we propose ways to evaluate and compare such measures. We focus in particular on SHAP, a game-theoretic measure of feature importance, and present results for a number of unfairness-prone datasets.
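To make the comparison concrete: one natural way to relate the two kinds of measures is to contrast a model's unfairness with the SHAP importance assigned to the sensitive feature. The sketch below is illustrative only, not the paper's protocol; it assumes scikit-learn and the shap package, and the synthetic data and the demographic parity difference metric are stand-in choices rather than the authors' experimental setup.

    # Minimal sketch, not the paper's protocol: contrast a fairness measure
    # (demographic parity difference) with the SHAP importance of the
    # sensitive feature. Data and model choices here are illustrative.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 2000

    # Synthetic data: a binary sensitive feature plus two ordinary features.
    sensitive = rng.integers(0, 2, size=n)
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    # The label leans on the sensitive feature, so a fitted model can be unfair.
    y = (x1 + 0.8 * sensitive + 0.1 * rng.normal(size=n) > 0.5).astype(int)

    X = np.column_stack([sensitive, x1, x2])
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    pred = model.predict(X)

    # Fairness measure: demographic parity difference between the two groups.
    dp_diff = abs(pred[sensitive == 1].mean() - pred[sensitive == 0].mean())

    # Feature importance: mean absolute SHAP value per feature.
    explainer = shap.TreeExplainer(model)
    sv = explainer.shap_values(X)
    if isinstance(sv, list):   # older shap versions: one array per class
        sv = sv[1]
    elif sv.ndim == 3:         # newer shap versions: (samples, features, classes)
        sv = sv[..., 1]
    importance = np.abs(sv).mean(axis=0)

    print(f"demographic parity difference: {dp_diff:.3f}")
    print(f"mean |SHAP| of sensitive feature: {importance[0]:.3f}")

Tracking how the sensitive feature's importance moves with the unfairness score, across models or datasets, is in spirit the kind of comparison the abstract describes.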

Related research

09/09/2021 | Gradual (In)Compatibility of Fairness Criteria
Impossibility results show that important fairness measures (independenc...

06/01/2021 | Information Theoretic Measures for Fairness-aware Feature Selection
Machine learning algorithms are increasingly used for consequential deci...

02/13/2018 | A comparative study of fairness-enhancing interventions in machine learning
Computers are increasingly used to make decisions that have significant ...

04/01/2020 | Understanding Global Feature Contributions Through Additive Importance Measures
Understanding the inner workings of complex machine learning models is a...

10/22/2022 | Abstract Interpretation-Based Feature Importance for SVMs
We propose a symbolic representation for support vector machines (SVMs) ...

10/01/2019 | Randomized Ablation Feature Importance
Given a model f that predicts a target y from a vector of input features...

03/10/2023 | Feature Importance: A Closer Look at Shapley Values and LOCO
There is much interest lately in explainability in statistics and machin...
