Interpretable Data-Based Explanations for Fairness Debugging

12/17/2021
by Romila Pradhan et al.

A wide variety of fairness metrics and eXplainable Artificial Intelligence (XAI) approaches have been proposed in the literature to identify bias in machine learning models that are used in critical real-life contexts. However, merely reporting a model's bias, or generating explanations with existing XAI techniques, is insufficient to locate and ultimately mitigate the sources of that bias. We introduce Gopher, a system that produces compact, interpretable, and causal explanations for bias or unexpected model behavior by identifying coherent subsets of the training data that are root causes of this behavior. Specifically, we introduce the concept of causal responsibility, which quantifies the extent to which intervening on the training data, by removing or updating subsets of it, can resolve the bias. Building on this concept, we develop an efficient approach for generating the top-k patterns that explain model bias; it leverages techniques from the machine learning (ML) community to approximate causal responsibility and applies pruning rules to manage the large search space of patterns. Our experimental evaluation demonstrates the effectiveness of Gopher in generating interpretable explanations for identifying and debugging sources of bias.
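The core idea of causal responsibility lends itself to a short sketch. The Python code below is a minimal illustration under simplifying assumptions: a binary sensitive attribute, statistical parity difference as the fairness metric, and exact retraining rather than the ML-based approximation the paper uses. The function names (statistical_parity_difference, responsibility, top_k_patterns) and the brute-force top-k loop are illustrative only and are not Gopher's actual API.

import numpy as np
from sklearn.linear_model import LogisticRegression

def statistical_parity_difference(model, X, sensitive):
    """P(pred = 1 | sensitive = 1) - P(pred = 1 | sensitive = 0)."""
    pred = model.predict(X)
    return pred[sensitive == 1].mean() - pred[sensitive == 0].mean()

def responsibility(X, y, sensitive, pattern_mask):
    """Fraction of the original bias removed by deleting the training
    rows matched by `pattern_mask` and retraining the model."""
    full = LogisticRegression(max_iter=1000).fit(X, y)
    bias_full = abs(statistical_parity_difference(full, X, sensitive))
    if bias_full == 0:
        return 0.0  # no bias to resolve

    keep = ~pattern_mask  # intervene: drop the pattern's rows
    reduced = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
    bias_reduced = abs(statistical_parity_difference(reduced, X, sensitive))

    return (bias_full - bias_reduced) / bias_full  # 1.0 = bias fully resolved

def top_k_patterns(X, y, sensitive, candidate_masks, k=5):
    """Rank candidate patterns (boolean masks over the training set)
    by responsibility and return the k most responsible ones."""
    scored = [(responsibility(X, y, sensitive, m), m) for m in candidate_masks]
    return sorted(scored, key=lambda t: -t[0])[:k]

Note that retraining once per candidate pattern, as this sketch does, is exactly what makes the naive approach infeasible at scale; per the abstract, Gopher instead approximates causal responsibility with ML techniques (influence-function-style estimates would be one plausible choice) and prunes the pattern search space rather than enumerating it exhaustively.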
