Considering Likelihood in NLP Classification Explanations with Occlusion and Language Modeling

04/21/2020
by David Harbecke, et al.

Recently, state-of-the-art NLP models have gained an increasingly sophisticated syntactic and semantic understanding of language, and explanation methods are crucial for understanding their decisions. Occlusion is a well-established method that provides explanations on discrete language data, e.g. by removing a language unit from an input and measuring the impact on a model's decision. We argue that current occlusion-based methods often produce invalid or syntactically incorrect language data, neglecting the improved abilities of recent NLP models. Furthermore, gradient-based explanation methods disregard the discrete distribution of data in NLP. Thus, we propose OLM: a novel explanation method that combines occlusion and language models to sample valid and syntactically correct replacements with high likelihood, given the context of the original input. We lay out a theoretical foundation that alleviates these weaknesses of other explanation methods in NLP and provide results that underline the importance of considering data likelihood in occlusion-based explanation.
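The core idea described in the abstract — scoring a token by the change in the model's prediction when that token is replaced with likely alternatives sampled from a language model — can be sketched as follows. This is a minimal illustration of the occlusion-with-resampling principle, not the authors' implementation: the classifier and the language-model sampler below are hypothetical toy stand-ins (a real setup would use a trained classifier and a masked language model such as BERT to propose in-context replacements).

```python
import random

def toy_classifier(tokens):
    """Hypothetical toy 'model': probability of the positive class,
    driven by the presence of sentiment words."""
    score = 0.5
    if "great" in tokens:
        score += 0.4
    if "terrible" in tokens:
        score -= 0.4
    return score

def toy_lm_sample(context, position, rng):
    """Hypothetical stand-in for a masked language model: propose a
    plausible replacement for the token at `position` given the context."""
    candidates = ["good", "bad", "fine", "movie", "plot"]
    return rng.choice(candidates)

def olm_relevance(tokens, position, classifier, lm_sample,
                  n_samples=50, seed=0):
    """Relevance of token i, following the occlusion-with-LM idea:
    p(class | x) minus the expected p(class | x with x_i resampled)."""
    rng = random.Random(seed)
    original = classifier(tokens)
    total = 0.0
    for _ in range(n_samples):
        replaced = list(tokens)
        replaced[position] = lm_sample(tokens, position, rng)
        total += classifier(replaced)
    return original - total / n_samples

sentence = ["the", "movie", "was", "great"]
relevance = {tok: olm_relevance(sentence, i, toy_classifier, toy_lm_sample)
             for i, tok in enumerate(sentence)}
```

In this toy setup, "great" receives a large positive relevance (resampling it removes the sentiment signal), while function words like "the" score near zero, since any in-context replacement leaves the prediction unchanged.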
