Wasserstein-based fairness interpretability framework for machine learning models

11/06/2020
by Alexey Miroshnikov, et al.

In this article, we introduce a fairness interpretability framework for measuring and explaining bias in classification and regression models at the level of a distribution. Motivated by the ideas of Dwork et al. (2012), we measure the model bias across sub-population distributions using the Wasserstein metric. The transport-theory characterization of the Wasserstein metric allows us to take into account the sign of the bias across the model distribution, which in turn yields a decomposition of the model bias into positive and negative components. To understand how predictors contribute to the model bias, we introduce and theoretically characterize bias predictor attributions, called bias explanations, and provide a formulation that accounts for the impact of missing values. In addition, motivated by the works of Strumbelj and Kononenko (2014) and Lundberg and Lee (2017), we construct additive bias explanations by employing cooperative game theory.
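To make the sign decomposition concrete: the 1-Wasserstein distance between two score distributions can be written as the integral of the absolute difference of their quantile functions, and splitting that difference by sign yields the positive and negative bias components described above. The following is a minimal sketch, not the paper's implementation; the function name `wasserstein_bias`, the quantile-grid discretization, and the toy Gaussian score data are all illustrative assumptions.

```python
import numpy as np

def wasserstein_bias(scores_a, scores_b, n_quantiles=1000):
    """Approximate the W1 distance between two model-score samples
    via their empirical quantile functions, and split it into a
    positive part (where group A's quantiles exceed group B's) and
    a negative part (the reverse). Illustrative sketch only."""
    ps = (np.arange(n_quantiles) + 0.5) / n_quantiles  # midpoint grid on (0, 1)
    qa = np.quantile(scores_a, ps)
    qb = np.quantile(scores_b, ps)
    diff = qa - qb
    pos = np.mean(np.maximum(diff, 0.0))   # region where A scores higher
    neg = np.mean(np.maximum(-diff, 0.0))  # region where B scores higher
    return pos + neg, pos, neg

# Toy example: group A's scores are shifted up by 0.1 relative to group B,
# so nearly all of the bias should land in the positive component.
rng = np.random.default_rng(0)
scores_a = rng.normal(0.6, 0.1, size=5000)
scores_b = rng.normal(0.5, 0.1, size=5000)
total, pos, neg = wasserstein_bias(scores_a, scores_b)
```

Here `total` approximates the Wasserstein-based model bias between the two sub-populations, while `pos` and `neg` give its signed decomposition; for the shifted-Gaussian toy data, `pos` carries essentially all of the mass.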

