Can we Agree? On the Rashōmon Effect and the Reliability of Post-Hoc Explainable AI

08/14/2023
by Clement Poiret, et al.

The Rashōmon effect poses challenges for deriving reliable knowledge from machine learning models. This study examined how sample size influences SHAP explanations of models belonging to a Rashōmon set. Experiments on five public datasets showed that explanations gradually converged as the sample size increased. Explanations derived from fewer than 128 samples exhibited high variability, limiting reliable knowledge extraction. However, agreement between models improved with more data, allowing a consensus to emerge; bagging ensembles often showed higher agreement. The results provide guidance on how much data is needed before explanations can be trusted, and the variability observed at low sample sizes suggests that conclusions drawn in that regime may be unreliable without further validation. Further work is needed with more model types, data domains, and explanation methods; testing convergence in neural networks and with model-specific explanation methods would be particularly impactful. The approaches explored here point towards principled techniques for eliciting knowledge from ambiguous models.
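To make the kind of measurement described above concrete, the minimal sketch below (not the authors' code) approximates a Rashōmon set with a few comparably accurate tree ensembles on an assumed public dataset (scikit-learn's breast cancer data), computes SHAP values on explanation subsets of increasing size, and scores inter-model agreement as pairwise Kendall's tau over mean |SHAP| feature importances. The dataset, the model choices, and the agreement metric are illustrative assumptions, not the paper's protocol.

```python
# Hypothetical sketch: inter-model agreement of SHAP explanations vs. sample size.
# All choices here (dataset, models, Kendall's tau agreement) are assumptions for illustration.
import numpy as np
import shap
from itertools import combinations
from scipy.stats import kendalltau
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier)
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Crude stand-in for a Rashōmon set: different inductive biases, similar accuracy.
models = [
    RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train),
    ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_train, y_train),
    GradientBoostingClassifier(random_state=0).fit(X_train, y_train),
]

rng = np.random.default_rng(0)
for n in (32, 64, 128, 256):  # explanation sample sizes
    idx = rng.choice(len(X_test), size=min(n, len(X_test)), replace=False)
    rankings = []
    for model in models:
        sv = shap.TreeExplainer(model).shap_values(X_test[idx])
        if isinstance(sv, list):       # older SHAP: one array per class
            sv = sv[1]
        elif sv.ndim == 3:             # newer SHAP: (samples, features, classes)
            sv = sv[..., 1]
        rankings.append(np.abs(sv).mean(axis=0))   # global importance per feature
    taus = [kendalltau(a, b)[0] for a, b in combinations(rankings, 2)]
    print(f"n={n:>4}  mean pairwise Kendall's tau = {np.mean(taus):.3f}")
```

Under a setup like this, the pairwise rank correlation would be expected to rise with n, mirroring the convergence the abstract reports; the specific thresholds (such as 128 samples) come from the paper's experiments, not from this sketch.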


Related research

Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition (05/05/2020)
Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification (12/10/2022)
Explanatory Pluralism in Explainable AI (06/26/2021)
Measure Utility, Gain Trust: Practical Advice for XAI Researcher (09/27/2020)
An empirical study of the effect of background data size on the stability of SHapley Additive exPlanations (SHAP) for deep learning models (04/24/2022)
Exploiting Explanations for Model Inversion Attacks (04/26/2021)
Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks (04/12/2023)
