Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis

09/21/2023
by Anahid Jalali, et al.

Post-hoc explainability methods aim to clarify predictions of black-box machine learning models. However, it is still largely unclear how well users comprehend the provided explanations and whether these increase the users' ability to predict the model behavior. We approach this question by conducting a user study to evaluate comprehensibility and predictability in two widely used tools: LIME and SHAP. Moreover, we investigate the effect of counterfactual explanations and misclassifications on users' ability to understand and predict the model behavior. We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model's decision boundary. Furthermore, we find that counterfactual explanations and misclassifications can significantly increase the users' understanding of how a machine learning model is making decisions. Based on our findings, we also derive design recommendations for future post-hoc explainability methods with increased comprehensibility and predictability.
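To make the idea of a post-hoc explanation concrete, the following is a minimal, self-contained sketch of a LIME-style local surrogate: it perturbs one instance, queries a black-box model, and fits a proximity-weighted linear model whose coefficients serve as per-feature attributions. The `black_box` function and all parameter values are illustrative assumptions, not taken from the paper or the actual LIME library.

```python
import numpy as np

# Hypothetical black-box model: a nonlinear scoring function standing in
# for any opaque classifier (purely illustrative).
def black_box(X):
    logits = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
    return 1.0 / (1.0 + np.exp(-logits))

def lime_style_explanation(model, x, n_samples=2000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around x (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance of interest with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = model(Z)
    # Proximity kernel: perturbations closer to x get larger weights.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # Weighted least squares: solve (A^T W A) c = (A^T W y).
    A = np.hstack([Z, np.ones((n_samples, 1))])  # intercept column
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

x = np.array([0.2, -0.4])
importances = lime_style_explanation(black_box, x)
print(importances)  # sign indicates local direction of each feature's effect
```

The user-study questions in the paper (comprehensibility, predictability) concern how people interpret exactly this kind of output: a signed importance per feature, valid only in a neighborhood of the explained instance.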


