
Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?

05/04/2020
by Peter Hase, et al.
University of North Carolina at Chapel Hill

Algorithmic approaches to interpreting machine learning models have proliferated in recent years. We carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability, simulatability, while avoiding important confounding experimental factors. A model is simulatable when a person can predict its behavior on new inputs. Through two kinds of simulation tests involving text and tabular data, we evaluate five explanation methods: (1) LIME, (2) Anchor, (3) Decision Boundary, (4) a Prototype model, and (5) a Composite approach that combines explanations from each method. Clear evidence of method effectiveness is found in very few cases: LIME improves simulatability in tabular classification, and our Prototype method is effective in counterfactual simulation tests. We also collect subjective ratings of explanations, but we do not find that ratings are predictive of how helpful explanations are. Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability across a variety of explanation methods and data domains. We show that (1) we need to be careful about the metrics we use to evaluate explanation methods, and (2) there is significant room for improvement in current methods. All our supporting code, data, and models are publicly available at: https://github.com/peterbhase/InterpretableNLP-ACL2020
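
As context for the abstract, the sketch below illustrates the quantity a forward simulation test estimates: the change in a user's accuracy at predicting the model's output before versus after studying explanations. This is a minimal illustration, not the authors' experimental code (see the linked repository for that); the function names and toy data are assumptions made for brevity, and the paper's counterfactual simulation tests, in which users predict model behavior on perturbed inputs, are not captured here.

```python
import numpy as np

def simulation_accuracy(user_predictions, model_outputs):
    """Fraction of inputs where a user correctly predicted the model's output."""
    user_predictions = np.asarray(user_predictions)
    model_outputs = np.asarray(model_outputs)
    return float(np.mean(user_predictions == model_outputs))

def explanation_effect(pre_phase_preds, post_phase_preds, model_outputs):
    """Change in simulation accuracy after users study explanations.

    A positive value suggests the explanation method improved simulatability;
    a value near zero suggests no measurable effect.
    """
    return (simulation_accuracy(post_phase_preds, model_outputs)
            - simulation_accuracy(pre_phase_preds, model_outputs))

# Hypothetical data: a model's outputs on 8 held-out inputs, plus one user's
# guesses of those outputs before and after seeing explanations.
model_outputs    = [1, 0, 1, 1, 0, 0, 1, 0]
pre_phase_preds  = [1, 1, 0, 1, 0, 1, 1, 0]   # 5/8 correct -> 0.625
post_phase_preds = [1, 0, 1, 1, 0, 1, 1, 0]   # 7/8 correct -> 0.875

print(explanation_effect(pre_phase_preds, post_phase_preds, model_outputs))  # 0.25
```

In the paper this kind of difference is aggregated over many users and test items for each explanation method and data domain, which is what yields the per-method estimates summarized in the abstract.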

Related Research

10/21/2022
The privacy issue of counterfactual explanations: explanation linkage attacks
Black-box machine learning models are being used in more and more high-s...

06/01/2021
Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals
Feature importance (FI) estimates are a popular form of explanation, and...

05/28/2023
Evaluating GPT-3 Generated Explanations for Hateful Content Moderation
Recent research has focused on using large language models (LLMs) to gen...

10/08/2020
Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?
Data collection for natural language (NL) understanding tasks has increa...

02/03/2021
When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data
Many methods now exist for conditioning model outputs on task instructio...

06/05/2022
Use-Case-Grounded Simulations for Explanation Evaluation
A growing body of research runs human subject evaluations to study wheth...

Code Repositories

InterpretableNLP-ACL2020

Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?"
