Do not explain without context: addressing the blind spot of model explanations

05/28/2021
by Katarzyna Woznica, et al.

The increasing number of regulations and expectations placed on predictive machine learning models, such as the so-called right to explanation, has led to a large number of methods promising greater interpretability. This demand has led to widespread adoption of XAI techniques such as Shapley values, Partial Dependence profiles, and permutation variable importance. However, we still do not know enough about their properties and how they behave in the contexts in which explanations are created by analysts, reviewed by auditors, and interpreted by various stakeholders. This paper highlights a blind spot which, although critical, is often overlooked when monitoring and auditing machine learning models: the effect of the reference data on the explanation calculation. We show that many model explanations depend directly or indirectly on the choice of the reference data distribution. We showcase examples where small changes in that distribution lead to drastic changes in the explanations, such as a reversal of a trend or, alarmingly, of a conclusion. Consequently, we postulate that obtaining robust and useful explanations always requires supporting them with a broader context.
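To make the dependence on reference data concrete, the sketch below (not taken from the paper; the dataset, model, and the particular subpopulation split are illustrative assumptions) fits a single model and then computes permutation variable importance against two different reference datasets. The explanation can change even though the model does not.

```python
# Minimal sketch: the same fitted model, explained against two different
# reference datasets, can yield different variable-importance values and
# potentially a different ranking. Dataset and model are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data; first half used for training, second half held out.
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X[:1000], y[:1000])

# Reference A: the full held-out data.
X_ref_a, y_ref_a = X[1000:], y[1000:]
# Reference B: a subpopulation selected on one feature (a shifted distribution).
mask = X_ref_a[:, 0] > 0
X_ref_b, y_ref_b = X_ref_a[mask], y_ref_a[mask]

imp_a = permutation_importance(model, X_ref_a, y_ref_a, n_repeats=10,
                               random_state=0).importances_mean
imp_b = permutation_importance(model, X_ref_b, y_ref_b, n_repeats=10,
                               random_state=0).importances_mean

# Same model, two reference distributions, two explanations.
print("importance on reference A:", np.round(imp_a, 3))
print("importance on reference B:", np.round(imp_b, 3))
print("ranking A:", np.argsort(-imp_a), " ranking B:", np.argsort(-imp_b))
```

In the spirit of the paper, reporting which reference data an explanation was computed on is part of the broader context needed to interpret it.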
