
Order in the Court: Explainable AI Methods Prone to Disagreement

05/07/2021
by Michael Neely, et al.

In Natural Language Processing, feature-additive explanation methods quantify the independent contribution of each input token towards a model's decision. By computing the rank correlation between attention weights and the scores produced by a small sample of these methods, previous analyses have sought to either invalidate or support the role of attention-based explanations as a faithful and plausible measure of salience. To investigate what conclusions rank correlation can reliably support, we comprehensively compare feature-additive methods, including attention-based explanations, across several neural architectures and tasks. In most cases, we find that none of our chosen methods agree. We therefore argue that rank correlation is largely uninformative and does not measure the quality of feature-additive methods. Additionally, the range of conclusions a practitioner may draw from a single explainability algorithm is limited.
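The comparison the abstract describes can be sketched as computing a rank correlation between two per-token salience vectors. The snippet below is an illustrative example, not the paper's code: the salience values are hypothetical, and Kendall's tau (here the simple tau-a variant, assuming no tied scores) stands in for whichever rank-correlation measure a study chooses.

```python
def kendall_tau(a, b):
    """Kendall tau-a rank correlation between two equal-length score lists.

    Counts concordant vs. discordant pairs: a pair (i, j) is concordant
    when both lists rank token i and token j in the same order.
    Assumes no tied scores within either list.
    """
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)


# Hypothetical per-token salience for the same 6-token input from two
# feature-additive methods (e.g. attention weights vs. a gradient method).
attention = [0.05, 0.40, 0.10, 0.25, 0.15, 0.02]
other_method = [0.12, 0.03, 0.30, 0.20, 0.25, 0.08]

tau = kendall_tau(attention, other_method)
print(f"Kendall tau = {tau:.3f}")  # -> -0.067: near zero, the rankings disagree
```

A near-zero (or negative) tau, as in this toy example, is the kind of disagreement the paper reports between explanation methods; the argument is that such a number by itself says little about which method, if either, is faithful.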
